Memory Map of an Imperative Program on Most Processors

This is part of a larger series known as “How To Program Anything: Core Rulebook.”

Preface

Imperative programming isn’t as mysterious or intimidating as it sounds.  Most modern popular programming languages have imperative features, and the average programmer uses imperative constructs all the time.  Imperative programming simply means that we are telling the computer what to do at each step.  In essence, each programming statement we make changes some kind of state, usually memory, within the computer.  Wikipedia distills it down like this:

In computer science, imperative programming is a programming paradigm that uses statements that change a program’s state. In much the same way that the imperative mood in natural languages expresses commands, an imperative program consists of commands for the computer to perform. Imperative programming focuses on describing how a program operates.
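To make that concrete, here’s a tiny C sketch (my own minimal example, not from the Wikipedia entry) in which each statement is a command that changes the program’s state, in this case the value of a variable sitting in memory:

```c
#include <stdio.h>

int main(void)
{
    int total = 0;        /* state: a value stored in memory */

    total = total + 5;    /* a command: change the state */
    total = total * 2;    /* another command: change it again */

    printf("total is %d\n", total);   /* prints: total is 10 */
    return 0;
}
```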

Almost all traditional computer hardware, particularly processors, is imperative.  The processor reads in a series of ones and zeros and interprets them as commands to do something, whether that is to store a value in memory, add two values together, or compare two values against each other.  That is why many programming languages, particularly earlier ones such as assembler and C, are also imperative in nature: they are really abstractions of the underlying hardware.

That is why when I talk of a “memory map” of an “imperative program” on most processors, I’m really referring to programs constructed “close to the hardware,” such as in assembler or C.  Most processors and operating systems will load and run programs with this type of memory model, from DOS to Windows to Linux.  Not ALL processors or operating systems do this, but it’s a pretty good bet that if you’re programming in something like C, you’re going to run into this type of model.  Now, when I talk of a “memory map” I’m indicating what various portions of computer memory, such as RAM or disk, are used for while the program runs.  Imagine it as a bit of an atlas to a particular landscape: I’m saying this is what is located here and this is what it is used for.  The first place we should stop is the processor, as it has a few pieces of memory available to it that I consider important to know about.

The Processor

A typical processor these days, built around an imperative model, has a number of what are referred to as registers.  You can think of a register as a sort of small storage unit hardwired into the processor; I imagine it kind of like a shelf that you can place binary values on and take them off of.  Registers come in different sizes and, in particular, different roles.  There are usually general-purpose registers that a program can freely use to perform its operations, but there are also registers that track various pieces of overall operation and are sometimes set by the processor itself.  For example, there may be a program counter register which holds the memory address of the next instruction for the processor to fetch.  There may be a stack register that holds the memory address of the current position on the stack (we’ll get to the stack later), and there may be a register that holds various flags, bits that indicate certain conditions were met after certain instructions.  Registers may range from 8 bits long to 128 bits or longer; it all depends on the hardware.  The key here is that addressing a register is different than addressing a spot in memory outside of the processor, such as RAM.

In fact, often you can only have direct access to the registers in a language such as assembly.  In a usual assembly program, you indicate the registers and portions of the registers with different mnemonics and identifiers; they don’t have a normal address like a variable in RAM might.  Registers are the fastest values for a processor to access, seeing as how they are “in” the processor itself, and usually incur very little time cost in operation.  There is a keyword, register, in the C language, which suggests that the compiler keep a variable in a register, but whether it actually does depends entirely on the compiler.  Any language more “abstract” or higher-level than C usually doesn’t allow access to registers at all.
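Here’s a minimal sketch of what that hint looks like in C.  One thing I’m sure of: the compiler is free to ignore the hint entirely, and the one hard rule the language imposes is that you can’t take the address of a register variable (it may not have one):

```c
#include <stdio.h>

int main(void)
{
    /* Politely ask the compiler to keep this counter in a register.
       Modern compilers usually make this decision themselves. */
    register int i;
    long sum = 0;

    for (i = 0; i < 1000; i++) {
        sum += i;
    }

    /* Note: writing &i here would be a compile error; a register
       variable has no memory address you can take. */
    printf("sum = %ld\n", sum);
    return 0;
}
```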

Some processors have caches, which are like small segments of memory that live just off “to the side,” so to speak, of the processor internals.  These are like registers in that they are easier and faster to access than outside memory such as RAM.  Most of the time the processor, and maybe the operating system, takes advantage of these caches itself in various ways, though some allow programmer access.  In general programming, you aren’t going to be accessing these caches directly unless you’re really trying to crank as much control and performance out of the processor as possible.
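You can’t usually address the cache directly, but you can feel its effect.  The sketch below is a common demonstration (my own example, not tied to any particular processor): it walks the same two-dimensional array twice, and the row-major walk touches consecutive memory addresses, so it tends to be noticeably faster because it cooperates with the cache:

```c
#include <stdio.h>

#define N 1024

static int grid[N][N];

int main(void)
{
    long sum = 0;

    /* Row-major walk: consecutive addresses, cache-friendly. */
    for (int row = 0; row < N; row++)
        for (int col = 0; col < N; col++)
            sum += grid[row][col];

    /* Column-major walk: jumps N ints at a time, cache-hostile. */
    for (int col = 0; col < N; col++)
        for (int row = 0; row < N; row++)
            sum += grid[row][col];

    printf("sum = %ld\n", sum);  /* same answer, very different speed */
    return 0;
}
```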

The Four General Regions of Memory

In many operating systems the “working memory” of a program may be split between RAM and disk, though the programmer programming the program (whew) doesn’t know where the split occurs.  For the programmer, there are four regions of working memory, safely assumed to be in RAM, that an imperative program takes advantage of.  These are the program code, the global variables/data, the heap, and the stack.  I have drawn up a diagram of these four areas of memory and how they may relate to each other in the abstract:

Program Code is where the machine code (or object code) of your program resides.  This is where the binary representations of the executable statements currently “running” are stored.  If you are using an interpreter, as opposed to compiling your program, the interpreter would be stored and executed from this area, taking your program in as input.  If, however, you compiled your program down to machine code, all the machine code instructions would be lined up here for the processor to go over.  It is possible to store data in the program code section, typically after the program code itself, but it’s not exactly the soundest method of data storage; there are many alternatives, mostly using separate files.
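If you’re curious where your code actually sits, one rough trick is to print the address of a function; on most desktop platforms a function pointer points into this program-code region.  A minimal sketch (note that casting a function pointer to void * is a common extension rather than strict ISO C, but it works on typical systems):

```c
#include <stdio.h>

void greet(void)
{
    printf("hello from the code region\n");
}

int main(void)
{
    /* The function's machine code lives in the program-code region;
       its address is where that code starts. */
    printf("greet lives at %p\n", (void *)greet);
    greet();
    return 0;
}
```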

Global Variables/Data is where all the values of your global variables are stored, or global data as the case may be.  This is important because a global variable stored here will have a fixed, set memory address.  The reason it’s important to have a separate area for global variables and data is this: imagine if the value of a global variable moved around in memory.  At one moment it was at this address, but then something happened in the program and it moved to another address.  How would any of the other parts of the program know where to access the global variable from then on?  You’d have to have a fixed location storing the address of this floating global variable that every part of the program could rely on… and if you’re going to have that, why not just store the global variable in that fixed location?  Thus, you have the global variables/data area of the memory map.
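A quick sketch of that fixed-address property: the global below reports the same address no matter where in the program you ask, which is exactly why every part of the program can rely on it:

```c
#include <stdio.h>

int shared_counter = 0;   /* lives in the global data region */

void bump(void)
{
    shared_counter++;
    printf("in bump(): shared_counter is at %p\n", (void *)&shared_counter);
}

int main(void)
{
    printf("in main(): shared_counter is at %p\n", (void *)&shared_counter);
    bump();
    bump();
    printf("value is now %d\n", shared_counter);   /* prints 2 */
    return 0;
}
```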

I’m going to go a bit out of order and first talk about the Stack.  The stack is a very well-known traditional abstract data type in computer science (I even do an implementation in my phabstractic library).  Basically, the idea is that you can put a value “on” a stack, and then later take that value “off” the stack.  In essence, you push values onto the stack, one “on top” of another, and then you pop each one off in reverse order to get their values back.  This is a very handy construction for various reasons, one of which I’ll elaborate on, but first we need to know what kinds of values are stored on the stack.  The stack is used in many imperative programs for a wide array of things, including storing the return addresses of function calls, the values of arguments to functions, functions’ local variables, and even the current state of the CPU if necessary.  If you’re unsure what functions, arguments, and local variables are, please refer to my Programming Crash Course (Bootstrap Part 1) article.  Now, the reason it’s important that the stack works the way it does is because of how functions or subroutines (in assembly) work.
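Here’s a minimal array-backed sketch of the stack idea itself in C (a toy example, not the phabstractic implementation mentioned above): values go on with push, and pop hands them back in reverse, last-in-first-out order:

```c
#include <stdio.h>

#define STACK_MAX 16

static int stack[STACK_MAX];
static int top = 0;           /* index of the next free slot */

void push(int value)
{
    if (top < STACK_MAX)
        stack[top++] = value;
}

int pop(void)
{
    return (top > 0) ? stack[--top] : -1;   /* -1 as a crude "empty" marker */
}

int main(void)
{
    push(1);
    push(2);
    push(3);
    printf("%d\n", pop());   /* 3 */
    printf("%d\n", pop());   /* 2 */
    printf("%d\n", pop());   /* 1 */
    return 0;
}
```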

Imagine you’re in the main program and have a set of variables.  Then you call a function with some values.  Well, you want to remember what your local variables are at the moment so you have them back when you return from the function/subroutine, right?  So, place them on the stack, and then when the function/subroutine returns, pop them off the stack.  Okay, now you’re in that function/subroutine… but you call another function/subroutine.  Same deal: store your local variables and arguments and such on the stack, placing them on top of the previous variables.  When that new function/subroutine returns, just pop off your variables… and when you return from THAT function, you pop off the main program’s variables and voilà.  In essence, the stack allows you to nest function calls on top of function calls and such, so that, when you start going in reverse, you can retrieve them all back.
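A small sketch of that nesting in C (the function names are just my illustration): each call gets its own locals in its own stack frame, so when inner() returns, outer()’s variable is exactly where it was left:

```c
#include <stdio.h>

void inner(void)
{
    int x = 99;   /* inner's own local, in inner's stack frame */
    printf("inner: x = %d\n", x);
}

void outer(void)
{
    int x = 42;   /* outer's local, stored in outer's stack frame */
    inner();      /* inner's frame goes on top of ours... */
    /* ...and is popped off when it returns, leaving our x untouched */
    printf("outer: x is still %d\n", x);
}

int main(void)
{
    outer();
    return 0;
}
```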

As you wind your way further into a program, the stack will grow and shrink accordingly.  Here’s where the line and the dotted separator in the diagram come in for the heap.  Oftentimes as the stack grows it’ll grow down (or up, depending on the architecture) into the space that the heap (we’re getting to it) takes up.  If the stack runs out of space or interferes with the heap, we have what we call a stack overflow.  This can happen if we call a lot of functions inside functions inside functions, etc.  It is particularly a problem in what are known as recursive functions, in essence functions that call themselves, placing information on the stack each time.
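As a cautionary sketch, the recursive function below has no stopping condition, so each call piles another frame (return address, parameter, local array) onto the stack until the stack is exhausted and the program crashes, on most desktop systems with a segmentation fault.  It exists only to demonstrate the overflow, so don’t expect useful output:

```c
#include <stdio.h>

/* No base case: every call pushes a new stack frame and none are
   ever popped, so the stack eventually runs out of room. */
long bottomless(long depth)
{
    char padding[1024];      /* make each frame noticeably large */
    padding[0] = 0;          /* touch it so it isn't optimized away */
    printf("depth %ld\n", depth);
    /* Adding padding[0] keeps the compiler from turning this into
       a loop via tail-call optimization. */
    return bottomless(depth + 1) + padding[0];
}

int main(void)
{
    bottomless(0);   /* will overflow the stack and crash */
    return 0;
}
```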

The Heap is the area of memory given to the program for the rest of its volatile data.  If you need to initialize a block of memory, say for an array or an image, you call your language’s memory allocation/deallocation functions to reserve some space in this area and load your data in.  It often holds the data for large and complex values, such as, as said, images or sound files, though it holds smaller values too.  In some operating systems the heap gets moved around and shifted to accommodate various values unbeknownst to the programmer, who doesn’t have to worry about such things.  However, as noted above, it’s possible to run out of heap space and run into the stack.  Rather than being a “heap overflow,” this is often just called running out of memory.
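In C the allocation/deallocation functions are malloc and free.  A minimal sketch of the cycle, which also prints where the heap block landed versus a stack local (on a typical desktop system the two addresses sit in visibly different neighborhoods, though the exact layout is up to your OS and toolchain):

```c
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int a_local = 0;   /* lives on the stack */

    /* Reserve room for 100 ints in the heap; malloc returns NULL
       if the heap can't satisfy the request. */
    int *block = malloc(100 * sizeof *block);
    if (block == NULL) {
        fprintf(stderr, "out of memory\n");
        return 1;
    }

    block[0] = 42;   /* use the reserved space */

    printf("heap block at  %p\n", (void *)block);
    printf("stack local at %p\n", (void *)&a_local);

    free(block);     /* hand the space back to the heap */
    return 0;
}
```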

Conclusion

This isn’t THE only memory map of every program everywhere, but it is generally the memory map for most programs the average programmer creates, those programs being imperative.  In programming languages like assembly or C, you work with this map pretty directly, but in more abstract or higher-level languages such as Python or PHP, many of these memory issues and placements are taken care of for you by the interpreter or compiler.  That’s the beauty of continual abstraction: the lower-level machinery takes care of various tedious issues, like memory management, in an automatic way, enabling you to move on to more heady issues such as, how does this crazy program work!?  However, no matter what level or focus of programming you’re working at, it’s important to know the memory issues and landscape that may be affecting your program, such as the stack overflow, and I hope this article helped elaborate those issues.  Thanks for reading!

This is part of a larger series known as “How To Program Anything: Core Rulebook.”

If you appreciate this article you might consider supporting my Patreon.

But if a monthly commitment is a bit much (I get it), you might consider buying me a coffee.

photo credit: ” fragments ‘pictosophiques ” 2/3 – To-Be & Not-To-Be are in a boat. Whe(re)n’s the boat the question ? Whe(re)n’s the Styx Memory ? via photopin (license)
