Foreword — Dear beginner, dear not-so-beginner, dear reader. This article is a lot to take in. You'll need perspective for it to make sense. Once ...
You got some things wrong about it.

- `switch` usually gets compiled to `if`, whether the target is machine code or bytecode, but still.
- About the assertions: it refers to `assert`, whatever the language is, and also to Design by Contract.
- About the preprocessor: it refers to Macros and Metaprogramming, not to Transpilers.
- I prefer a 100x100 maximum: 100 characters per line and 100 lines per function.
Those are interesting points; to reply in order:

- How `switch`es are compiled is not-so-relevant to this point, because it is more about human perception of code flow than technical reasons. But as I've stated in other comments, the `switch` ban is really not my strongest take.
- Regarding assertions: an `assert` is essentially an `if` that can be used to exit the flow and return an error code, which is then taken care of by rule 7 (emit exceptions and let them bubble up).
- Regarding line length, `black` did some research and found that 88 is the best. Which is fine by me: it allows two files to sit side by side on a laptop screen at a readable font size. But honestly, as long as the numbers stay consistent project-wide, knock yourself out.

IMO, the ability to display two bits of code side by side is by far the most useful outcome of restricting line lengths.
60x80 is enough for a class.
I respect Remy's wishes about avoiding the `switch` debate, but regarding `switch` compilation there are typically two outputs from the compiler, depending on the code and optimization settings:

- If/else chain: as mentioned, this is typically used when there are few options in the `switch` statement or there is a large delta between the `switch` enumerals.
- Jump table: if there are a large number of cases and they are mostly sequential, the assembly can be a jump table, where you have a start address and the enumeral is an offset. This is more performance-efficient, but not always space-efficient, and this is where your optimization settings come in.

There are edge cases which result in some quirky behaviour, but these are typically compiler-specific.
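For intuition, a JavaScript-flavored sketch of what a jump table buys (the handlers are stand-ins):

```javascript
// Conceptual sketch of a compiler's jump table for dense, sequential
// cases: index straight into a table instead of testing each value in turn.
const handlers = [
  () => console.log("stop"),
  () => console.log("run"),
  () => console.log("pause"),
  () => console.log("reset"),
];

function dispatch(opcode) {
  const handler = handlers[opcode]; // O(1), like the generated jump table
  if (handler === undefined) throw new Error(`unknown opcode ${opcode}`); // the "default" branch
  handler();
}

dispatch(1); // "run"
```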
These rules don't make sense for all types of software. NASA is working with embedded systems that are limited and don't use GCs. Also, I believe they aren't OO.
An anti-pattern in GC languages is to hold on to memory for a long time. If you aren't allocating and (implicitly) freeing objects rapidly in modern OO languages, then you're holding on to many objects for a long time, and that will make the GC work much harder in some cases. GCs today are optimized to get rid of short-lived objects, and perform poorly when too much data is promoted to the long-lived generation.
Of course, the principle that you should limit your code to O(1) RAM when possible is a good one, because if you don't, you'll get out-of-memory errors. Of course, the impact of that showing up in a web app is not the same as your robot crashing right before it lands on Mars (thus causing a physical crash and a loss of a billion dollars and 20 years of work).

My point is that software is different, and rules need to reflect those differences. NASA, for example, is not known for staying within timelines and budgets. For commercial software, that is probably much more important than avoiding an improbable crash now and then.
These rules are taken from MISRA, which is typically used for safety critical embedded software.
Yes, they aren't always applicable to all software and languages, but they are good things to consider.
All I'm saying, in essence, is that you need to be accountable for the resources your program uses. Otherwise you risk blowing things up. More than once I've seen a dev's app blow up when reaching production volumes, and I'm certainly not talking about Google scale.
So the advice is more: O(whatever(n)) complexity in RAM and time is acceptable, but with a bound on `n` and thus a bound on `whatever(n)`. If you work on data, work by pages. If you work on a stream, work on N items at once. If you do big data, do Map/Reduce.
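A minimal sketch of the "work by pages" idea in JavaScript, assuming a hypothetical `fetchPage` data source:

```javascript
// Bounded-memory processing: never hold more than PAGE_SIZE rows in
// memory at once, however large the dataset. `fetchPage` stands in for
// the real data source (DB, API...); here a tiny in-memory one.
const PAGE_SIZE = 2;
const rows = ["a", "b", "c", "d", "e"];
const fetchPage = async (offset, limit) => rows.slice(offset, offset + limit);

async function processAll(processRow) {
  let offset = 0;
  for (;;) {
    const page = await fetchPage(offset, PAGE_SIZE);
    if (page.length === 0) break; // no more data
    for (const row of page) processRow(row);
    offset += page.length;
  }
}

processAll((row) => console.log(row)); // a, b, c, d, e (two rows at a time)
```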
Also NASA was notoriously on time to put a man on the Moon, so I'm guessing that their methodology to push the boundaries of science and Humankind won't be a hindrance for more conventional tasks. At least in my case, these rules help me on a daily basis to deliver quality code on time.
But yeah first time I read that I was like "bollocks nobody needs that". I guess denial is a first step :)
NASA also had a huge budget relative to the time period when they were putting a man on the moon. Something that most conventional tasks do not have the luxury of having.
Funnily enough there was no budget allocated to software on the Apollo program.
But you need the budget to prove everything, not to apply things in best effort mode. In my case, applying those rules saves time and not the opposite.
In those days software was an afterthought for bean counters. Back then it was just rolled into "Development & Operations." And with $28.7 billion in inflation adjusted dollars for that line item, let's just say they had enough money to get it right. Which is rare for software projects today.
Let me first say that I wrote a long comment that was subsequently eaten by my browser's interaction with this website, which keeps no cookie-based copy of comments in progress. That comment started out by saying that your article was great and that it had a lot of really good advice. Unfortunately I was weary of typing it in again, so sadly it was lost to the ether.
But your article also had a few points of opinion, which by nature are impossible to prove time-saving in their application. To assert otherwise would be hubristic and would illustrate nothing more than confirmation bias. #fwiw
I wouldn't overdo it. As others pointed out, those guidelines are tactical-level rules for working with a GC-free, memory-unsafe language in the context of embedded real-time systems. That environment requires you to have total control over what the code does, how much time it spends on it, and how much memory it uses. Many of the points trade increased code complexity for that control.
Some notes:
Point 1b - avoiding recursion. I'd chill out with this a bit; recursion ain't scary once you familiarize yourself with it. It's rarely the best tool (unless you work with trees a lot), but for some problems a recursive solution is the cleanest, most readable one. If your language supports Tail Call Optimization, it may even (in some cases) be as fast as iteration and not pose a risk of stack overflow.
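For instance, a small JavaScript sketch of a problem where recursion reads best (the node shape is an assumption):

```javascript
// Counting leaves in a tree of unknown, uneven depth: the recursive
// version mirrors the shape of the data. Assumes nodes look like
// { children: [...] }.
function countLeaves(node) {
  if (!node.children || node.children.length === 0) return 1;
  return node.children.reduce((sum, child) => sum + countLeaves(child), 0);
}

console.log(countLeaves({
  children: [{ children: [] }, { children: [{ children: [] }] }],
})); // 2
```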
Point 3 - avoiding heap allocations - is very much applicable to dynamic languages for performance reasons. Dynamic allocation costs performance (not as much as in C/C++ if your language runtime optimizes memory management for allocation, but still).
You can learn to write so-called "non-consing" (i.e. non-constructing, non-allocating) code, but it's tricky: such code may become less readable than the "regular" alternative, and it definitely needs to be isolated, because it can corrupt your data if references leak. The trick is to learn your language and standard library (and the libraries you use), and pay attention to operations that modify their arguments instead of returning a new value. They're sometimes called "destructive operations".
Consider e.g. the following REPL session in Common Lisp (a representative example using `oddp`):
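```lisp
CL-USER> (remove-if #'oddp (list 1 2 3 4 5 6 7))
(2 4 6)
CL-USER> (delete-if #'oddp (list 1 2 3 4 5 6 7))
(2 4 6)
```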
The results are seemingly the same; the difference is that `remove-if` returns a fresh list, while `delete-if` modifies its list argument. In non-performance-critical places you should prefer the `remove-if` variant, because it's safer, and GC can get rid of the unused data just fine. `delete-if` is faster, because it only rebinds pointers in the list to drop some nodes; but if that `(list 1 2 3 4 5 6 7)` was passed in via reference, and someone else held that reference, that someone would discover the data changed under them.

To write non-allocating code, you should also use arrays more: they can be allocated once at the required size and reused without reallocation. It's especially handy if your language supports unboxed primitives; for things like numbers, you can then rewrite your critical code to perform operations "in place", using preallocated arrays as temporary buffers for calculations.
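A JavaScript sketch of that idea, using a preallocated typed array as a scratch buffer (sizes and names are illustrative):

```javascript
// A scratch buffer allocated once and reused: the hot function below
// performs no allocations per call. Assumes inputs of at most 1024
// elements; numbers in a Float64Array are unboxed.
const scratch = new Float64Array(1024);

function movingAverage3(samples) {
  for (let i = 1; i < samples.length - 1; i++) {
    scratch[i] = (samples[i - 1] + samples[i] + samples[i + 1]) / 3;
  }
  // Write the result back in place; this destructively updates
  // `samples`, so callers must not rely on the old values.
  samples.set(scratch.subarray(1, samples.length - 1), 1);
  return samples;
}

console.log(movingAverage3(new Float64Array([1, 4, 2, 8, 5])));
// Float64Array [1, 2.333..., 4.666..., 5, 5]
```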
Point 9b (avoid function pointers) is subtly left uncovered. In dynamic languages it would translate to "avoid using functions as values": avoid higher-order functions, avoid lambda expressions, etc. Applying it would be a huge loss of code clarity for no real win in the context of web development (or really any non-embedded development).
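For a sense of what that ban would cost, a small JavaScript example (the data is made up):

```javascript
// Functions-as-values in routine use: a predicate and a projection
// passed to higher-order functions.
const users = [
  { email: "a@x.io", isAdmin: true },
  { email: "b@x.io", isAdmin: false },
];
const adminEmails = users.filter((u) => u.isAdmin).map((u) => u.email);
console.log(adminEmails); // ["a@x.io"]
```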
Many of these strike me as only relevant if you're passing off code to someone who is either very junior, or not familiar with the language the code is written in.
The switch argument being the best example of this. A `switch` along these lines (a representative example):
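```javascript
const status = 2; // example input
let label;
switch (status) {
  case 1:
    label = "active";
    break;
  case 2:
    label = "suspended";
    break;
  default:
    label = "unknown";
}
console.log(label); // "suspended"
```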
Really isn't any more readable than
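```javascript
const status = 2; // example input
let label;
if (status === 1) {
  label = "active";
} else if (status === 2) {
  label = "suspended";
} else {
  label = "unknown";
}
console.log(label); // "suspended"
```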
To anybody with a modicum of experience, or half a brain :D
As for the recursion argument, one could easily limit recursion depth by keeping track of how deeply you've recursed and stopping when you hit the depth limit. Sure, iteration might seem easier, until you run across a nested tree you need to parse that has varying depth on each branch, and you want to write efficient code to parse it. Recursion definitely has its place, as does virtually every other thing you're arguing against.
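A sketch of that depth guard in JavaScript (the limit and node shape are assumptions):

```javascript
// Recursion with an explicit depth bound: keeps the natural recursive
// shape while satisfying the spirit of the rule. MAX_DEPTH is arbitrary.
const MAX_DEPTH = 100;

function walk(node, visit, depth = 0) {
  if (depth > MAX_DEPTH) throw new Error("tree deeper than expected");
  visit(node);
  for (const child of node.children ?? []) walk(child, visit, depth + 1);
}

walk({ id: 1, children: [{ id: 2, children: [] }] }, (n) => console.log(n.id)); // 1, 2
```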
If you need half a brain to understand and check a `switch` statement, that's already half a brain you're not spending on other things.

It's like those boxes full of things that "might be useful later" that some people keep in their garage, in the hope that one day this handheld barcode scanner will yield some utility. As you might know, that day rarely comes, and in the meantime the box takes up a LOT of space.
So regarding the `switch`, two things:

- Either every branch ends with a `break` and it behaves like an `if`/`else if`/`else`, in which case the `switch` syntax itself is simply useless because it is totally identical to the other one;
- Or branches fall through, and then it's essentially a `goto`.

Basically, either it's bad or it's useless. So that's not something I want to bother with.
That's a hell of a twist of my words. Never did I say it takes half your brain to understand, but that someone with half a brain CAN understand.
Let me put it another way since you clearly didn't understand. Switch statements are stupid simple to grasp, unless the person looking at it is a complete idiot -- or, as I also said, completely unfamiliar with the syntax of the language, in which case they oughtn't be poking around the code in the first place.
Obviously we're not going to agree, and that's fine, I don't have to work with you, so your desire to eliminate perfectly useful easily understood elements from code doesn't affect me :)
I agree, except on the being cruel part.
Switches are as readable as ifs if not more.
Recursion has its very needed place.
Always good to avoid unnecessary complexity, but I don't agree with arbitrarily avoiding recursion.
There are recursions whose depths are easily calculated, and whose code is easily reasoned about. Remember, the speed of a solution is often better governed/adjusted at a higher level (shaving milliseconds by using iteration instead of recursion vs. saving seconds by changing the algorithm and/or data structure).
This is not what Dijkstra was talking about. Dijkstra wrote that essay before we had structured programming, when you did not call a function or method but had to jump to places, make sure you prepared the stack correctly, and hope the target location did not change.
None of the "modern" languages support these types of jumps. The continue or break to a specific label is not the same, as it is still strictly scoped.
I'm not sure if you've already used `goto`, but the few times where I thought that Dijkstra was wrong, it felt like having my brain smashed in by a slice of lemon wrapped round a large gold brick. Whichever the language.

I have programmed in Basic. I have written code with line numbers. I have messed up that code.
Jumping to the wrong line was so easy, and so difficult to figure out.
Those were simpler times 😢
The fact that a feature exists in a language doesn't mean it's a feature worth using. These languages are designed by mortals and often inherit patterns and ideas from previous (and flawed) languages.
JavaScript, while beloved, was originally designed in a hurry and is famous for having "bad parts" which experienced developers not only avoid using but purposefully don't teach to newbies.
You say we should use these features for what they are best for. I agree that we should certainly not abuse any language feature, but the point here is that there are almost always better ways to solve the problems these features address. By "better" I mean "easier for you and others to understand next year" and "offering fewer places for bugs to hide."
Excellent questions, the article was already too long to dive into this.
Regarding the `switch` issue, it's because the syntax is like the first sketch below. Were it like the second one, I wouldn't mind, but unfortunately it's not, so the safest option is still a series of `if`/`else if`/`else`.
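Reconstructed sketches of the two shapes (representative examples). Today's syntax, where nothing closes a case and a forgotten `break` silently falls through:

```javascript
const n = 2; // example input
switch (n) {
  case 1:
    console.log("one");
    break;
  case 2:
    console.log("two"); // forget `break` here and case 3 also runs
  case 3:
    console.log("three");
}
// prints "two" then "three"
```

Versus a hypothetical syntax where each case is explicitly delimited and cannot fall through (not valid JavaScript; purely illustrative):

```javascript
switch (n) {
  case 1 {
    console.log("one");
  }
  case 2 {
    console.log("two");
  }
}
```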
Regarding your point on references: basically, what they do is not just pass an object (which is a shared point of anchorage and can be used to retrieve subsequent data) but also make it so that if you change the value of that object in the called function, the change bubbles up to the calling function (see the example in the article). Fortunately this feature just does not exist in JavaScript, so you don't have to worry about it :)
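A quick JavaScript sketch of the distinction (illustrative):

```javascript
// JS passes object references by value: reassigning the parameter
// inside the function never rebinds the caller's variable.
function reassign(x) { x = { changed: true }; }
function mutate(x) { x.changed = true; }

const a = { changed: false };
reassign(a);
console.log(a.changed); // false: the caller's binding is untouched
mutate(a);
console.log(a.changed); // true: mutation through the shared object is visible
```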
Based on your dislike of `switch`es because they do not have intermediate ending braces, my guess is you really hate Python? #justasking

Oh no, I love Python; actually it makes these rules easier to apply, and it does not have `switch`. But it's about the confusing control flow rather than the syntax (which I really don't care about). Forget a `break` and you're toast. Stack cases and you're confusing anybody reading your code.
Ironically, just a few weeks ago I read an article by Dave Cheney (the bard of Go) entitled "Clear is better than clever" that advocates for using `switch` instead of `if`/`else`.

His article does mention that the `switch` in Go does not fall through by default. Your objection to the `switch`, which falls through by default (a language decision I lament), seems to be based solely on the potential to omit a `break`. But his article does give other reasons for `switch` to be superior to `if`/`else`, none of which are mentioned in your post.

Personally, I think a belief that in all cases either `if` or `switch` is superior to the other is simply allowing oneself to overindulge in one's own unsubstantiated opinion.

(Notice I left off `else`; I concur with @DoMiNeLa10 and agree with the advice of Mat Ryer: avoid `else` when you can.)

Earlier this week I refactored some `if`/`else` code into a `switch` statement in PHP. I did so thoughtfully and with the sole goal of adding clarity to code written by someone who is no longer involved. The code took action based on the number of path segments in a URL. It had one `if` and three (3) `else if`s, where each condition tested `count($parts)` for equality with `1`, `2`, `3` or `4`.

I needed to add logic for `5`, so I changed from `if`/`else if`/`else` to a `switch` so the code would not need to call `count($parts)` five (5) times (nor set a temp var), and so the logic clearly spelled out that it was handling 1 of 5 options.

Further, when formatted, the code was much more readable, the `break`s were obvious, and the nature of the code meant it was unlikely someone would ever add both a `6` and a `7` and forget to include a `break` on the `6`.
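For illustration, the same shape in a JavaScript sketch (the original was PHP; the handlers are invented stand-ins):

```javascript
// Dispatch on the segment count, evaluated once, with one clearly
// delimited branch per arity. `show` stands in for the real handlers.
const show = (what) => console.log("showing", what);

function route(parts) {
  switch (parts.length) {
    case 1: show("index"); break;
    case 2: show("section"); break;
    case 3: show("article"); break;
    case 4: show("comment"); break;
    case 5: show("reply"); break;
    default: show("404");
  }
}

route(["", "news", "2024"]); // showing article
```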
My point with this anecdote is that almost all coding "rules" really should be applied with an analysis of the use-case at hand and not be considered pedantic absolutes. Clinging to dogma really does not benefit anyone.
Unless of course we are dealing with junior programmers, and then all bets are off. :-D
Definitely, control structures are a debate that is faaar from over and also highly specific to each language. I've put here general observations but far from me to be definitive on that specific matter. It's more guiding ideas.
Regarding variable re-assignment, I do it as little as possible. But sometimes in JS, with block scoping, you need to use a `let` here and there to assign a value from inside a condition. There is also the case of counters for loop iterations. If we were to make a rule like that, it would require a lot of fiddling, I think.

Unfortunately not all loops can be implemented as `for` loops: for example, numerical iterative algorithms that stop when a "small gain" condition is met, or the Euclidean algorithm for the GCD. The latter is an example of a loop that is guaranteed to run a finite number of iterations, but that number cannot be determined in advance. I would loosen that rule to something like: every loop must provably terminate.
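For instance, the Euclidean loop in a JavaScript sketch:

```javascript
// Euclid's algorithm: terminates because the remainder strictly
// decreases, yet the iteration count can't be fixed in advance.
function gcd(a, b) {
  while (b !== 0) {
    [a, b] = [b, a % b];
  }
  return a;
}

console.log(gcd(252, 105)); // 21
```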
This is clearly easy with a `for` loop, but stuff like `Loop_Variant` can help you in the more general case.

About the `switch` rule: this is very language-dependent, as not every language has the fall-through behavior of C. For example, in Octave/Matlab, Ruby and Ada there is no need for a `break`. I guess fall-through was chosen to avoid the need for a more complex syntax for multiple cases sharing the same branch (but this is just a wild guess of mine).

Actually, in Ada I prefer to use `case` (equivalent to C's `switch`) whenever possible, since the language requires that you specify a branch for every possible choice; that is, you cannot leave cases unexpressed (but you can always use the default case `when others =>`). This is especially useful if you are "switching" on enumerative types. For example, suppose you add a new value to an enumerative type: thanks to this rule you cannot forget to update all the `case`s that need updating.

A fun fact about `continue`: in Ada there is no `continue` statement. This is not due to "philosophical" reasons; it was simply overlooked in the first version of the language, and in successive versions it was not added because it would have broken backward compatibility. If you need a `continue` in Ada, just use a `goto` to the end of the loop. This is considered perfectly acceptable in the Ada community which, let me tell you, is a bunch obsessed with correctness, readability and maintainability.

As for me, I have no special objection to the use of `continue`. Sure, it breaks strictly structured programming by introducing a `goto`-like behavior (but so does `break`), but its use is very limited and it can make code easier to read (like a premature `return` from a function). Let's remember that the problem with `goto` is that you can write code that jumps here and there with no definite sense of direction; `continue` does not have this problem.

For example, the following is a construct I use often in Ruby when I need to parse, say, configuration files:
The alternative, more adherent to the rules of structured programming, would be an `if` that runs only when the line is not a comment, adding an almost useless indentation level.

Great article!
I showed these NASA rules to my team several years ago and we discussed how we could apply their wisdom to our development efforts. We eventually ended up with something like “avoid dangerous practices and language features.”
I’ll share your article instead of the NASA rules next time we need a refresher on these concepts.
That's quite right. Also, Python makes a lot of the rules in this guide quite natural. For example, a lot of people are complaining about the `switch` rule, but in fact that construct does not even exist in Python :)

Early `return`s might not be language-specific, but they are paradigm-specific. While returning early can make sense in a statement-oriented language, it isn't nearly as popular in expression-oriented languages.
In at least one (Scala), an explicit `return` of any kind, but especially an early `return`, is explicitly an anti-pattern. Of course, most of these languages have pattern matching, which is a great alternative to both `switch` and complex `if...else` blocks.

Ada has special constructs for that: you can specify pre- and post-conditions for procedures/functions, `Loop_Invariant` (and `Loop_Variant` too, used to prove that the loop terminates), type invariants, and normal assertions. I usually spread my code liberally with this kind of assertion: they are wonderful bug-snipers and act as documentation too (especially pre- and post-conditions). In some cases you can also use SPARK/Ada to formally prove that some assertions are always verified.

Update that to half a laptop's screen so you can show diffs in meetings.
That makes a lot of sense
This is an outstanding article.
... But don't introduce arbitrary bounds. Many C code exploits were based on exceeding array bounds. Scripting languages should use their automatic memory management. I should now argue that it is better to have a slower program that gives the correct result eventually, rather than faster code giving wrong results arbitrarily. Hmm...
So many rules! They gave me a headache, but I can't find anything in them to refute.
Don't go to silly limits with this, though: why not check the ranges of x, y, and z in one function with three assertions?
Also, CoffeeScript is not obfuscated JS; it is safer and more humane to read. I'm really sad to see it being left aside for TS.
Great job! I didn't even know JPL had the Power of 10 rules, having worked there.
This kinda maps to the Python good practices of
Very well spotted; I completely forgot to mention `eval()`. That's another successor of macros. To be honest, I haven't done an eval in so long that I did not even consider it a threat.

You used a goto in Rule 5 of your own rules. ;-)