Doing Stuff With Computers

This article is part of a series; see the series index.

Now that we understand how a computer can “remember” both inside (RAM) and outside (e.g. CDs), we can move on to how it processes what it “remembers.” While it is useful to remember things, it is more useful if we can do stuff with what is “remembered” (the information stored in the computer).

The primary processor is the user, who directs the computer and its activities, but the primary computational device is the CPU (central processing unit), a chip on the motherboard of the computer. The CPU is behind everything the computer “does,” from the display to the peripherals. Although the mouse or monitor may process their own information, it’s the CPU that directs their activity, just as the user directs the CPU.

There are several types of CPUs from different manufacturers such as Motorola, AMD, and Intel. Since each chip has a slightly different architecture (a term we’ll cover later), we are forced simply by scope to pick one for our journey into the world of assembly. It is also important that we decide what operating system (to be explained in a later article) we’re going to be programming under. I have chosen a 32-bit Intel chip running the Linux operating system. If you want to duplicate what is written here, then I suggest using Ubuntu or the like.

Being a microchip on a circuit board, the CPU is made up of transistors, with pins connecting it to the rest of the computing machine. But what makes up the inside of a CPU? A whole bunch of transistors, obviously, so we have to approach it conceptually. The CPU has lots of supporting circuitry to process and store commands and instruction memory. One way to think about these things is to take their most important parts and focus on them primarily. The essentials of a CPU can be summed up in “registers” and “flags.” Flags are for another article.

Registers make up the workbench of the CPU. They store information in a convenient and easily accessible place, and some store or read commands that are then performed by the computer. As you can infer, not all registers are the same, even though all of them store information; some have specialized purposes.

To get the CPU to “do” anything it has to be instructed. Just like a binary sequence may represent a number or letter to us, that same sequence to a CPU can represent a certain instruction to perform. These sequences, when used as such, are called “machine instructions.” For example, one instruction may be to add two registers together and put the result in the second register.

But from where does the CPU get these instructions? Well, from… wait for it… memory, of course! Among the things that attach the CPU to the rest of the “computer” are, and this is a simplification, the “address pins” and the “data pins.” A CPU can read data from an address, and even write data to an address (changing the content at the address). The diagram below may help to illustrate:


The CPU puts an address code on the address pins and reads the result on the data pins. Likewise, the CPU can put an address code on the address pins, then data on the data pins, and that data will be written (recorded) at that address in memory. This is a simplification of what is known as the data bus. The data bus also includes peripherals, which have “I/O addresses.” It is how the computer as a whole talks to itself. All of this, of course, is still performed through voltages on the circuit.

In general the CPU is guided by a sequence of instructions it reads from memory. This sequence is known as a “program.” These instructions, no matter how far back we may move conceptually, are what make up a program for the computer to “run.” A computer program, as you can see, is also made up of data, being ones and zeros itself.

Pointers and Ticks

The CPU literally operates like clockwork. There is a subsystem known as the system clock. Through some magic of electrical engineering, all of the transistors inside the CPU synchronize their operations with each “tick” (pulse) of the clock. It used to take, a long time ago ;-), multiple clock “cycles” to complete a single operation, but these days one clock cycle may complete two, four, or even eight instructions in parallel.

Simplified: on just about every clock cycle, the CPU fetches and executes an instruction from memory. How does it know which instruction to fetch from all the addresses of memory? That’s where the “instruction pointer” comes into play. The instruction pointer “points” to (contains the address of) the next instruction to be fetched. Once that instruction is executed, the pointer holds the address of the next one.

Architecture & Design

Even among 32-bit Intel CPUs there are many manufacturers and types. The type of chip we’re using is sometimes referred to as an “Intel-compatible x86 CPU.” That alone covers many compatible chips, such as the 8086, 80286, 80386… Pentium Pro, Pentium MMX… Pentium Xeon, Pentium Core, etc. Whew! And think: before Apple switched to Intel processors, they had the PowerPC line (a primarily RISC processor). From a programmer’s point of view this only affects us in the form of what registers are available, what flags get flipped, what the ancestral “instruction set” of the processor is, and any subunits such as the arithmetic logic unit. All of these details combined are referred to as the chip’s “architecture.” Architecture is also used to speak of the design of the chip itself, but the former definition will be used here.

Many chips are built to be “backwards compatible,” meaning they can still operate using instruction sets from older generations of the chip family. However, chips are not “forward compatible”: a specialized instruction on a newer chip won’t work on an older “generation” of chip.

As advances are made in chip hardware, whether that be 32-bit to 64-bit, cache size, or anything else, we can generally rest assured that our instructions, sometimes with small modifications, will still run.


A CPU may have a particular “architecture,” its family or lineage, but it always has a “microarchitecture” as well. The microarchitecture consists of the microscopic nuts, bolts, and electrodes that work in the background to make the architecture work. Chip designers don’t change the architecture, like adding a register, without good reason. However, they can change what supports the architecture, the microarchitecture, for the better.

The kinds of changes that are both possible and meaningful are broad and always specialized. The broadest, and perhaps most important, is heat. CPUs give off a lot of waste heat generated by the electricity, to simplify, running through them. This heat can be a problem since it can damage neighboring components and even destroy the chip itself. Heat sinks, thermal gel, and fans help dissipate it, but figuring out how to reduce a chip’s power consumption is an important aspect of “microarchitecting.”

Another effort in the world of microarchitecture is to increase processor throughput. In other words, make it “go faster.” Ways of making more instructions pass through faster are usually arcane and esoteric. Most of them involve making the processor and the data bus work together faster. Some examples of the techniques and technologies involved include some you may have heard of before: L1 and L2 cache, hyper-pipelining, and more.

Minor changes in a chip’s microarchitecture are assigned codenames like “Yonah” or “Katmai.” Major changes are assigned codenames like “Core” or “NetBurst.”

What this all means to you, really, as an above-average programmer (I can dream, can’t I?) is… not much. Because great lengths are taken to preserve a chip’s architecture despite changes in its microarchitecture, you can rest assured that your programs will simply run on any chip in the desired family of architectures.


It’s important to remember that the CPU is made of transistors. Millions of them. An instruction code isn’t a number or character (though it can be represented that way) but a sequence of switches’ states that end up flipping hundreds of thousands of other switches. The CPU is, after all, still a machine.

If you appreciate my tutorials, please help support me through my Patreon.



