Imperative programming is a paradigm that explicitly tells the computer what to do and how to do it. Unlike most other approaches, it is a relatively concrete view of data and execution -- there isn't much mystery or abstract runtime behavior involved. The essential aspects of imperative programming are sequenced instructions and mutable data.
Imperative programming is the cornerstone of computing. CPUs primarily work as imperative execution engines, and compilers translate our programs into this form. As the primary way of driving computers for most of our programming history, it's well represented in the language arena.
I'm not saying all of computing works this way. A notable exception is the GPU: its primary mode of operation is quite different from the sequential nature of a CPU, following more of a functional paradigm. But this article is about the imperative world.
Why?
Breaking down a problem into steps is something that we can all naturally do. We understand the real world in sequences and flowcharts: things have one state, something happens, then they have a new state. This maps relatively easily to imperative programming.
Computers, and networks, are primarily imperative. Many things have to happen in a certain order, and memory must be constructed in a specific way. Imperative programming lends itself to use-cases that touch on these low-level domains.
Imperative programming is the most adaptable when it comes to implementing other paradigms. It's a starting point for creating tools and domain-specific languages. It's a fallback when no other approach seems to apply.
This article focuses on the benefits and core qualities of imperative programming. As with all paradigms there are disadvantages; I'll have to look at those in a future article.
Sequence
Most prominent in imperative programming is the sequential flow of instructions and branching.
var a = 5;
var b = calculate(a);
if( b > 10 ) {
    print("Too big");
    return;
}
Code lines are executed one at a time from top to bottom. A value of 5 is stored in the variable a. The value of a is then taken by the calculate function. The result is assigned to b, and then the if condition is checked. Regardless of how large our functions get, they can always be viewed as a set of individual discrete steps executed in order.
A very large "as-if" rule is at play here. The code executes "as-if" it follows the rules I just stated. A compiler is free to find purpose in our code and apply all sorts of optimizations: short-cuts, rewriting, and reordering. CPUs care little for our need to debug code, doing their own reordering, caching, and branch prediction. The "as-if" rule says this is okay so long as the results are the same as executing our instructions as written.
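To make the rule concrete, here's a minimal sketch, assuming a typical optimizing compiler (the function is my own hypothetical example):

// With optimizations enabled, the compiler may legally replace this
// whole loop with the constant 4950: the observable result is the
// same "as-if" every iteration had run.
int sum_to_99() {
    int total = 0;
    for (int i = 0; i < 100; i++) {
        total += i;
    }
    return total;
}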
Mutable variables
This sequencing is dependent on mutable variables. Variables represent a storage location, not just a value. Bear with me through a few trivial examples; there's a point to this.
int a;
a = 5;
The variable a contains an integer. At first it is undefined, or has a default value, depending on the language. We assign the value 5 to it. a now contains a copy of this value, wholly unrelated to the constant value 5.
int b = 6;
a = b;
b = 7;
a and b are distinct values. When we assign b to a we get a copy of the value; when 7 is assigned to b later, it does not alter the value of a. It's possible to make variables bind to the same value using pointers (or references, or whatever word your favourite language invented for them).
int b = 5;
int* a = &b;
//*a == 5
*a = 6;
//b == 6
These examples may seem trivial, but they demonstrate significant aspects of this paradigm. Imperative programming has a direct concept of memory. We get a canvas in which we can read and write values at will. This requires that we understand values and pointers. Many languages go so far as allowing us to reinterpret memory as arbitrary types.
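As a sketch of what that reinterpretation can look like, here's one common C++ approach using memcpy (a union or std::bit_cast are alternatives):

#include <cstdint>
#include <cstdio>
#include <cstring>

int main() {
    float f = 1.0f;
    std::uint32_t bits;
    // Copy the raw bytes of the float into an integer so we can
    // read its IEEE-754 bit pattern directly.
    std::memcpy(&bits, &f, sizeof(bits));
    std::printf("%08x\n", bits); // prints 3f800000
    return 0;
}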
Concurrent processing poses a problem to imperative programming. The general paradigm still holds, but the "as-if" rule is significantly loosened across threads. Only special instructions are guaranteed to have any ordering between parallel threads. Data is only required to be consistent if we go out of our way to make it consistent. Failing to pay tribute to the concurrent gods will invite evil spirits that wander the program and smash bits in infuriatingly hard-to-decipher patterns.
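As a minimal sketch of one such special instruction, here C++'s std::atomic (my choice for illustration; other languages have their own equivalents) establishes the ordering between two threads:

#include <atomic>
#include <cassert>
#include <thread>

int data = 0;
std::atomic<bool> ready{false};

void producer() {
    data = 42;          // plain write to shared memory
    ready.store(true);  // atomic store: orders 'data' before 'ready'
}

void consumer() {
    while (!ready.load()) {} // spin until the producer signals
    assert(data == 42);      // guaranteed only because 'ready' is atomic
}

int main() {
    std::thread t1(producer);
    std::thread t2(consumer);
    t1.join();
    t2.join();
    return 0;
}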
Functions
Sequential ordering and mutable data define how functions work. A function may, like its functional programming cousin, simply take input and provide a calculated result, but in imperative programming it's unlimited in what it can do. A function may have all sorts of side effects: changing global memory values, populating a database, loading a texture for the screen, starting a phone call, etc.
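As a minimal sketch (the body here is my own hypothetical illustration), consider a calculate that does more than compute its result:

#include <cstdio>

int call_count = 0; // global state the function will mutate

int calculate(int a) {
    call_count++;                       // side effect: bump a global counter
    std::printf("calculating %d\n", a); // side effect: write to the console
    return a * 2;                       // the value-producing part
}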
What all functions have in common is being called at a specific point, and completing before the caller continues. Functions don't partially execute, nor are they executed multiple times (again, the "as-if" rule applies).
int calc(int q) {
    int a = one(q);
    int b = two(q);
    return b;
}
It doesn't matter that a is not used; one has to be evaluated in case it has side effects. It must be evaluated prior to two as well. All the instructions inside those functions have a defined global order within this function. The imperative paradigm depends on this ordering when communicating between components. Consider the setup and teardown of a library:
lib_handle h;
lib_init(h);
lib_call(h, 1, 2, 3);
lib_release(h);
The compiler never really knows what we're trying to do; it only knows what we tell it to do. Some would say imperative programming involves the least amount of "magic": the computer just does what we tell it to do, whether or not it makes sense.
What about promises, or functions that dispatch events? These are offered by a lot of "imperative languages", but they are no longer part of the imperative paradigm. Languages evolve, usually for the better, allowing for new approaches and techniques to be used. This is a good reason to stop calling them "imperative languages", opting for "general purpose languages" instead.
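A tiny sketch of why (event_queue here is a hypothetical dispatcher I've invented for illustration): the handler no longer has a fixed position in the caller's sequence of instructions.

#include <cstdio>
#include <functional>
#include <queue>

// Handlers wait here instead of running at the point of the call.
std::queue<std::function<void()>> event_queue;

int main() {
    event_queue.push([] { std::printf("handler\n"); });
    std::printf("after dispatch\n"); // executes before the handler

    // The handler runs at some later, unrelated point in the program.
    while (!event_queue.empty()) {
        event_queue.front()();
        event_queue.pop();
    }
    return 0;
}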
Object oriented programming
Object oriented programming is an extension of imperative programming. It provides syntax and names to many commonly used patterns. The focus is on grouping functions and data into logical classes.
//non-OOP language
struct my_data {
    int a;
};
void my_data_init( my_data * m ) {
    m->a = 0;
}
void my_data_incr( my_data * m ) {
    m->a++;
}

//OOP language
class my_data {
    public int a;
    public my_data() {
        a = 0;
    }
    public void incr() {
        a++;
    }
}
The syntax extensions are definitely helpful, but they don't represent a fundamentally different way of programming. C programmers were structuring their code this way before C++ was even available. It's perhaps a refinement of the paradigm, but not a fundamentally different one. We're still giving the computer rather explicit instructions on how to structure data and the order in which things should be executed.
Building blocks
Imperative components are linked to each other via functions and variables. Programs are created by sequencing calls to many libraries and altering a global state, whether in memory, on disk, or over a network. We can loop on conditions, branch, or even call functions that wait for something to happen. Everything that happens is explicitly written, and ordered, in the code.
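A sketch of that glue style, reusing the hypothetical lib_* calls from earlier (has_work and next_job are likewise invented for illustration):

lib_handle h;
lib_init(h);

while (has_work()) {        // loop on a condition
    int job = next_job();
    if (job < 0) {          // branch on the result
        break;
    }
    lib_call(h, job, 0, 0); // mutate library/global state
}

lib_release(h);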
Though complete programs can be written strictly in this paradigm, that is no longer common except for the smallest of scripts. Imperative programming serves well when combined with other paradigms to create applications. Events connect a UI application, where the individual responders are written imperatively. Declarative languages describe server deployment, but the individual rules are imperative. A simulation package that has primarily functional calculations can be glued together with a bit of imperative code.