Remembering Things With Computers

This article is part of a series; access the series index.

Computer Memory

So we’ve learned about binary and Boolean operations, along with logic gates. None of these, not even logic gates, can do the second most important thing necessary for computation.

That is the ability to remember.

This is where the ‘on’ and ‘off’ of binary comes into play. In order for there to be any form of memory, you must have some kind of mechanism that can occupy and transmit more than one kind of ‘state.’ A state is some characteristic or aspect that is distinguishable from another. For instance, a light bulb has three ‘states’: on (lit), off (dark), and broken. Of course we can’t compute with broken, so we just stick with on and off.

Now think about binary. This analogy should begin to illuminate the connection between binary and computer hardware. As mentioned previously, logic gates are the building blocks of computational hardware, acting on a bunch of ‘lightbulbs.’ This sea of ‘lightbulbs’ is what we call ‘memory.’

The electrical engineering term for this kind of memory container is a ‘relay’: a switch (like one for a lightbulb) that is flipped by electricity for the purpose of controlling electricity. Early computers used actual electromechanical relays; modern computers implement the same switching idea with silicon transistors. You don’t have to know exactly how a ‘field-effect transistor’ works to take advantage of it. However, for the sake of completeness I’ve provided a diagram of a memory cell (a transistor and all its supporting components) as a little black box with inputs and outputs.

[Diagram: a memory cell drawn as a black box with an input pin, a select pin, and an output pin]

If you put voltage (electricity) on the input pin while the select pin is also energized, voltage will manifest and remain on the output pin. The output pin keeps whatever value the input had the last time the select pin was active; that holding is the ‘remembering.’
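To make that behavior concrete, here is a minimal software model of such a cell in C++. This is a sketch of the black box above, not the actual transistor circuitry: the cell only takes on a new value while the select pin is active, and holds its old value otherwise.

```cpp
#include <iostream>

// A software model of the memory cell above: the input is only
// latched onto the output while the select pin is active.
struct MemoryCell {
    bool output = false;  // the remembered state

    // Apply voltages to the input and select pins.
    void apply(bool input, bool select) {
        if (select) {
            output = input;  // select active: the cell takes on the input
        }
        // select inactive: the cell holds (remembers) its last value
    }
};

int main() {
    MemoryCell cell;
    cell.apply(true, true);    // write a 1
    cell.apply(false, false);  // select is off, so the 1 is retained
    std::cout << cell.output << '\n';  // prints 1
}
```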

Accessing Memories

There are a myriad of devices designed to store information by using different ‘states.’ On a compact disc, for instance, a series of bits is stored as marks in a reflective material and read back with a laser; on a historical diskette, and in magnetic hard drives, bits are stored as tiny magnetized spots on a spinning disk, read back by a moving head (the drive’s analogue of a needle). These are ‘serial devices’ for storing data. The memory cell above, by contrast, lives inside an electrical circuit and is not meant for long-term storage.

To store anything of value besides ‘yes’ and ‘no’ we need to have multiple memory cells. Adding one memory cell to a group of memory cells automatically doubles the number of different overall states, and thus the storage space. This can be illustrated by comparing the binary 10 to the binary 100. Remember, each binary digit represents a transistor/memory cell. The first binary sequence has 2 bits (a maximum of 4 different states) and is the decimal number 2. The second binary sequence has 3 bits (a maximum of 8 different states) and is the decimal number 4. See how the maximum number of individual combinations of ones and zeros doubles when only one bit is added?
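You can watch the doubling happen with a short loop; this little C++ sketch just prints 2^n for the first few bit counts:

```cpp
#include <iostream>

int main() {
    // Each added bit doubles the number of distinct states: 2^n.
    for (int bits = 1; bits <= 8; ++bits) {
        unsigned states = 1u << bits;  // shifting left by n multiplies by 2^n
        std::cout << bits << " bit(s) -> " << states << " states\n";
    }
}
```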

How can we access multiple memory cells to get at their states, such as 100 or 10? This is where the concept of ‘memory addresses’ comes into play. Whether that means specifying a chip, addressing sets of chips together, reading a charge off a disk, or picking out the number of a bit within a byte, every bit has a location (a numerical address) in the computer. We can look at how this works by examining a simplified memory chip that holds more than one bit of memory.

[Diagram: a simplified RAM chip with eight address pins and a single data pin]

The diagram above represents a memory chip. This chip might hold 1 kilobit, or 1 megabit; whatever it holds, there is a numerical maximum to the address that can be flung at it. In fact, the address pins themselves can sometimes be an indicator of the maximum memory available on the chip: 8 pins, like above, could mean that this memory chip holds 2^8 = 256 different bits. How it works is simple, despite the complex construction: you send voltages down the various address pins, and that Boolean combination (the numerical address) is decoded by the internal circuitry of the memory chip to serve up the specific cell and its charge (or lack thereof) on the data pin.
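As a rough software analogy (a real chip uses a grid of cells and a decoder circuit, not an array lookup), here is a hypothetical model of an 8-address-pin, 256-bit chip:

```cpp
#include <bitset>
#include <cstdint>
#include <iostream>

// A toy model of the chip above: 8 address pins select one of
// 2^8 = 256 one-bit cells, whose charge appears on the data pin.
struct MemoryChip {
    std::bitset<256> cells;  // 256 one-bit memory cells

    bool read(std::uint8_t address) const {  // 8 address pins = one byte
        return cells[address];               // 'decoded' by the chip
    }
    void write(std::uint8_t address, bool value) {
        cells[address] = value;
    }
};

int main() {
    MemoryChip chip;
    chip.write(0x0F, true);                // charge cell 15
    std::cout << chip.read(0x0F) << '\n';  // the data pin reads 1
}
```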

Something of this nature, though temporary, is suitable for use as random access memory, rather than serial data storage. All ‘random access’ means is that any address can be reached directly, in any order, without stepping through everything before it. Random access memory is popularly known by the acronym RAM (sounds like the word for a male sheep).

Accessing memory takes time. Not a lot of time, but a small amount. For example, a memory chip may take 20 nanoseconds from when the voltages are applied to its address pins to when the voltage appears on its output data pin. A nanosecond is a billionth of a second, so that doesn’t seem like much, does it? You’d think computers ought to be operating at light speed. But they don’t. Why is that? It turns out that a modern computer can easily access RAM a billion times in a short span of work, and a billion accesses at 20 nanoseconds apiece comes to 20 whole seconds. As anybody whose spending habits are small but prolific knows, the little charges at the point of sale add up to something very large. So as you program in the future, whether in assembly (which we will learn first), C++, or PHP, keep in mind that every access to memory slows your program down. For the most part, unless you’re doing some intense calculations, you won’t notice the minuscule loss of speed, but it is there.
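If you’d like to feel this for yourself, here is a rough sketch (timings will vary wildly with your machine, caches, and compiler settings) that times a large number of memory reads:

```cpp
#include <chrono>
#include <iostream>
#include <vector>

int main() {
    // Ten million values; summing them costs one memory read apiece.
    std::vector<int> memory(10000000, 1);

    auto start = std::chrono::steady_clock::now();
    long long sum = 0;
    for (int value : memory) sum += value;  // one access per element
    auto stop = std::chrono::steady_clock::now();

    std::chrono::duration<double> elapsed = stop - start;
    std::cout << "summed " << sum << " values in "
              << elapsed.count() << " seconds\n";
}
```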

In the general case memory is built from multiple memory chips strung together, so that one address produces a set number of output bits on the data pins, which are then assembled into something meaningful. See the diagram below:

[Diagram: several one-bit memory chips sharing the same address lines, their data pins assembled into a multi-bit value]

The diagrams above show a computer’s memory chips as holding only one bit of data per address. While that may be true for some computers, many modern computers serve up 4, 8, or even more bits per address. By convention, however, every byte has an address in the computer, regardless of whether the machine fetches 8, 16, 32, or 64 bits at a time.
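Continuing the earlier toy model (purely illustrative, not real circuitry), here is how eight one-bit chips wired to the same address lines can serve up a whole byte at once:

```cpp
#include <array>
#include <bitset>
#include <cstdint>
#include <iostream>

// Eight one-bit chips share the same address lines; each contributes
// one bit of the final byte.
struct OneBitChip {
    std::bitset<256> cells;
    bool read(std::uint8_t address) const { return cells[address]; }
};

std::uint8_t read_byte(const std::array<OneBitChip, 8>& bank,
                       std::uint8_t address) {
    std::uint8_t byte = 0;
    for (int i = 0; i < 8; ++i)
        byte |= bank[i].read(address) << i;  // chip i supplies bit i
    return byte;
}

int main() {
    std::array<OneBitChip, 8> bank{};
    bank[0].cells[5] = true;  // bit 0 of address 5
    bank[2].cells[5] = true;  // bit 2 of address 5
    std::cout << int(read_byte(bank, 5)) << '\n';  // prints 5 (binary 101)
}
```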

Measuring The Bit

The data pins on the memory chips are assembled into a series of voltages representing some ‘number’ or piece of ‘information.’ Going along with the pattern of powers of 2, we find various sizes of information produced from these arrays of memory chips. There is the nybble, which is 4 bits; the byte, which is 2 nybbles; the word, which is 2 bytes; the double word, which is 2 words; and the quad word, which is 2 double words. I’ve assembled a list below:

Nybble – 4 bits
Byte – 2 nybbles (8 bits)
Word – 2 bytes (16 bits)
Double Word – 2 words (32 bits)
Quad Word – 2 double words (64 bits)
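These sizes map neatly onto the fixed-width integer types C++ offers in <cstdint> (there is no nybble type, since nothing smaller than a byte gets its own address):

```cpp
#include <cstdint>
#include <iostream>

int main() {
    // sizeof reports sizes in bytes; multiply by 8 for bits.
    std::cout << "byte:        " << sizeof(std::uint8_t)  * 8 << " bits\n";
    std::cout << "word:        " << sizeof(std::uint16_t) * 8 << " bits\n";
    std::cout << "double word: " << sizeof(std::uint32_t) * 8 << " bits\n";
    std::cout << "quad word:   " << sizeof(std::uint64_t) * 8 << " bits\n";
}
```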

It is rare to see a computing device access anything smaller than a byte, such as a nybble; the term is fading into obscurity, so you saw it here, folks. That isn’t to say modern computers work byte by byte. On the contrary, their memory chips may return information on 8 data pins (a whole byte) at once, and their processors may operate on quad words or even larger units (double quad words, or 128 bits), such as the PlayStation 3 with its Cell processor.

If this is starting to overwhelm you (as I know it would me, and I’d be inclined to just stop reading), there is hope. With the help of most modern operating systems, combined with the internal circuitry of the computer, you will rarely address a physical location in the system directly. You pretty much end up ‘telling’ the ‘computer’ what address in memory you think you want for your program, and the ‘computer’ (the operating system, etc.) locates the exact hardware location for you and returns its value. This matters because programs each get their own memory space: one program’s address and another program’s address may be the same number, yet point to two different places inside the physical computer. That’s the beautiful part: the computer (operating system) takes care of that for us.
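You can catch a glimpse of this yourself. The sketch below prints the address of a local variable; run two copies of it at once and (depending on the operating system and its address randomization) they may print the very same virtual address even though the two values live in different physical locations:

```cpp
#include <iostream>

int main() {
    int value = 42;
    // This prints a *virtual* address; the operating system maps it
    // to some physical location the program never sees.
    std::cout << "value lives at " << &value << '\n';
}
```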

The Moral (Gist) of the Story

All in all, the point of this article is that every byte ‘housed’ in the computer, whether it’s on a serial device or part of RAM, has its own unique numerical address.

When these addresses are written or spoken of directly, they are usually given in hexadecimal, like so: 0x000F. That would access byte 15.
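As a quick sanity check that 0x000F really is byte 15, in C++:

```cpp
#include <iostream>

int main() {
    unsigned address = 0x000F;     // a hexadecimal literal
    std::cout << address << '\n';  // prints 15
}
```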

Hope you’ve had fun reading this article and that you learned something useful.

If you appreciate my tutorials, please help support me through my Patreon.

