I can’t solve this one, and I think I need the help of the DEV Community. So, a developer was responding to a code review comment I made and they simply asked me, “why would I do that?” I gave my standard, dusty answer: “because you have to code defensively; you don’t know what the future holds.” But I suddenly realized… am I perpetuating a fear of the future? How could I code fearfully when I run CubicleBuddha.com, where I blog so often about living happily in the present? I’ll share the specific code example with you. I’m hoping to hear from the community whether my solution is “coding in the moment” or if I am actually bowing down to the fear.
A classic defensive programming example
Part of the duty of reviewing a coworker’s code is to try to see what they might have missed. This follows the standard definition of defensive programming:
Defensive programming is when a programmer anticipates problems and writes code to deal with them. (1)
So, imagine you were reviewing a pull request and the code was making some assumptions. At first glance, the code sample below looks innocuous. And maybe it is. But having spent decades fixing other people’s production bugs, my spider-sense was tingling with fear. A specific bug comes to mind (which I’ll demonstrate in the second coding sample below) that leaves me staring at the GitHub code review not knowing how to proceed. I’m trapped wondering if I should keep quiet to preserve a carefree relationship with my peer or if I should speak up and prevent the potential production bug. Am I being haunted by the early years of my career, when I was relegated to only bug fixing? Or were my formative years an invaluable training ground that makes me who I am today?
“If you are caught in sorrow and regret about the past, or if you are anxious about what will happen to you in the future, then you are not really free to enjoy the many wonders of life that are available in the here and now.”
~ Thich Nhat Hanh
See for yourself if you can find where a bug can easily manifest. If you can’t see the bug, then I’m almost jealous that your past didn’t inform you of the potential nightmare. There is a bliss in not knowing. But sadly, users who experience production bugs don't care about your "bliss," they just want to finish what they were doing:
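A sketch of the kind of code I’m talking about (the names are just for illustration; it’s the shape that matters: three known signals plus a catch-all else):

```typescript
type TrafficLight = "red" | "yellow" | "green";

function driverReactsTo(light: TrafficLight): string {
  if (light === "red") {
    return "stop";
  } else if (light === "yellow") {
    return "slow down";
  } else {
    // Anything that isn't red or yellow falls into the catch-all: go!
    return "go";
  }
}
```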
Okay, yea. No problems “in the present.” And one could argue (as my peer continues to do) that since our program is only used in geographical regions that are limited to the three major traffic signals (red, yellow, and green), we don’t have to worry about this right now. My peer is using one of my favorite phrases against me: “You Ain’t Gonna Need It” (YAGNI). And I get it. But do we really not care about expanding the software?
And this is the biggest internal conflict I struggle with between my coding style and my philosophical beliefs. Why build software if you don’t want it to be used by an expanding group of people? There’s no shame in hobbyist programming. But if you’re a professional programmer, you’re doing it to make money and/or to improve the lives of your customers.
So, can we be pragmatic? Can we try to be a buddha in a setting so sterile as a cubicle? Can we have one foot in commerce with another foot in calmness? The coding technique below will (in my opinion) help you to make way for the future while calmly focusing on the present.
Seeing the car crash of the future… and remaining calm
So consider the fact that when you get new users, you should hopefully be learning about the needs of your new customers. And new use cases mean new features to write. And here’s the classic example. Today, we only deal with 3 lights. But what if we start selling the software in other states? For instance, the state that I live in has a blinking red light where you’re required to stop first before you go (kind of like a stop sign). Let’s see if the code that worked before has protected us from the future. Can you spot the calamity that would occur?
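Roughly speaking, the type grows to cover the new market while the reaction code stays exactly as it was (again, just a sketch):

```typescript
// The new requirement shows up as a fourth member of the union...
type TrafficLight = "red" | "yellow" | "green" | "red blinking";

// ...but the reaction code is untouched, so the new case silently
// lands in the final else:
function driverReactsTo(light: TrafficLight): string {
  if (light === "red") {
    return "stop";
  } else if (light === "yellow") {
    return "slow down";
  } else {
    return "go";
  }
}

driverReactsTo("red blinking"); // returns "go"
```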
Hold on a second, if the driver saw a red blinking light… wouldn’t that fall into the fall-through/else case? Wouldn’t they… oh no! Kaboom!!! Let’s see if we can prevent that future car crash but without having to do too much more work in the present.
Defending the future: the “never” type comes to the rescue!
Thankfully TypeScript has a language feature called the “never” type that allows the compiler to identify when a case in a union of types (or a case of an enum) has not been accounted for. As you can see below, by not allowing the series of if-elses to fall through to a default else, the compiler will tell us that we forgot to instruct the driver how to respond to the “red blinking light.”
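Here’s a sketch of the idea (I keep a tiny assertUnreachable helper around for exactly this purpose; the exact wording of the reactions is just for illustration):

```typescript
type TrafficLight = "red" | "yellow" | "green" | "red blinking";

// If this helper is ever reachable, the code no longer compiles.
function assertUnreachable(light: never): never {
  throw new Error(`Unhandled traffic light: ${light}`);
}

function driverReactsTo(light: TrafficLight): string {
  if (light === "red") {
    return "stop";
  } else if (light === "yellow") {
    return "slow down";
  } else if (light === "green") {
    return "go";
  } else if (light === "red blinking") {
    return "stop first, then go when clear";
  } else {
    // Remove the "red blinking" branch above and this line stops compiling:
    // Argument of type '"red blinking"' is not assignable to parameter of type 'never'.
    return assertUnreachable(light);
  }
}
```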
And now the driver won’t get into a car crash when we decide to start handling blinking red lights… because we literally couldn’t compile the code until we instructed the driver how to respond to this new case. In the original example, the code would have told the driver to just “go.” That doesn’t seem mindful to me.
The beauty of this defensive programming technique is that it costs almost no time to add exhaustive type checking to your code. The experienced programmer part of my brain knows that coding defensively is the simplest and best way to look out for the user’s needs. But, I worry sometimes that my career prevents me from truly acting like a Buddhist. Hopefully techniques like this “assert never” approach will allow me to strike a balance. After all, I’m just human. And Buddhism teaches us to love our humanity and to accept our emotions.
But what do you think? I’d love to hear your thoughts on Twitter and Dev.to about the healthiness of defensive programming. Do you think it’s worrying too much about the future? Should you only concentrate on what the software needs to do today? Or do you think it’s okay to code defensively?
Top comments (125)
Never really called this "defensive programming" – or fear-based programming. A good programmer/engineer/architect/etc. is forward-thinking and seeks to author solutions that neither paint themselves nor their successors nor their customer into a corner. If the incremental cost of adding anticipatory code-paths is low and otherwise obvious, it makes little sense to take a shortcut now that will bone you, your successors or your customers in the future (see "technical debt").
Totally agree with this statement, except for when you code for scenarios that might never exist. We had a severe problem of simple tasks taking days longer because every engineer was trying to fix every edge case. Honestly I think a more proactive approach is making your code clean and tested so it's easy to expand later. So the real gotcha here is: how much is too much "defensive programming"?
Yeah. It's definitely a balance. And it's a "balance" that comes both from general experience and knowledge of your particular customers' or target userbase's proclivities.
One thing that helps with tempering a tendency towards sacrificing adequate delivery-speed on the altar of future-proofing is realizing that "future-proofing" is often a stand-in for "idiot-proofing" ...and that there will always be a better idiot out there. =)
Yes indeed. It’s all about knowing the right balance between future proofing and what is needed today by the user. Just out of curiosity, did you find this “exhaustiveness checking” approach to be within the balance? I.e. would you use it every time you write a switch case statement or an if statement that checks all cases? Because I use it every time but I am curious what others think.
I think we need to do our due diligence but if you're working on an app and you know the user base will be quite small (internal software for companies or marketing sites, even niche software) then a lot of the scenarios engineers dream up just won't happen and if they do then you'll have the extra cash to address them. Just make your code clean, so the next engineer hates you a little less 😉
I don’t see new cases as being something “engineers dream up.” Happens every day. That’s why the never type says “with what I know now, the code will never experience a different case at runtime.” Seems like an easy thing to apply every time regardless of the size of the user base. Because the user base will grow. And when the user base grows, so will the requested functionality.
I definitely do, but without solid samples it's just us bike shedding 😂
I would say it depends what you are programming and who for. If you're writing software for something where failure will cost someone's life (eg air traffic control, railway switchers, radiation therapy treatment for cancer), I would argue be as defensive and thorough as possible to avoid unexpected conditions.
That’s great to hear. I also feel that way. What have you done to persuade the less-experienced members of your team that the incremental cost is low?
I find that I can explain it once. I can remind them again in a code review. But if they ignore it the third time... that means that the developer really doesn’t want to code like (as you put it) “a good programmer.” That’s always sad for me to watch when a developer doesn’t want to take the extra step to help the customer from surprise bugs, but part of my growth as a senior dev has been learning to “advise” not “force” good behavior. I must admit that I’m finding it challenging. I’ll take any and all advice! :)
It's a dialogue that usually depends on the personalities involved. My approach is generally to frame things as questions, usually something along the lines of "that's a good approach to the direct problem, but how would you extend this to meet …". The other thing that helps is to remember that showing usually helps more than just telling.
Ultimately, whether you're advising or explicitly mandating a change, you're exercising a degree of force (inasmuch as you're causing someone to do something they wouldn't have otherwise done). The degree of force appropriate to a given situation will depend on the person you're interacting with and the importance and visibility of the deliverable.
The other thing to bear in mind when evaluating the force applied is that, at the end of the day, when you sign off on a PR, your name is now on that code, too. Whoever looks at that commit history can rightly interpret that you were ok with the state of things. Generally, I'm all for letting people do things how they see fit. However, I have to feel comfortable staking my reputation and my employer's reputation on a given chunk of code.
Ooh I’m definitely gonna use this line of yours:
Thank you so much for your response. I think choosing my battles is the hardest part of being a lead/senior dev. It’s great to get feedback from others. :)
I've had devs in code reviews that are grateful for the advice, but others that thank you for the input but know better. Sometimes, if the impact is minimal and the bug reversible, I've let code go through that isn't great so that the person learns from it. I wouldn't recommend that for every problem though!!!
Yea, absolutely. As you pointed out, it’s really tough to draw the line between code that needs to be corrected and code that can slide. But I think that the fact that devs like us try to think about the distinction at all makes us better servant leaders. So congrats on being a thoughtful dev! :)
Thanks and congrats to you too!
Defensive programming is a must if you wish to obtain high-quality software. The main thing it does is stop bugs from propagating far through the code.
Your example is a complex one, because it depends a lot on the language. There are numerous simpler examples that can demonstrate the value of defensive coding. That is, just because there are situations where it may not be warranted, doesn't mean it isn't warranted as a whole.
Checking enums via if's is generally the wrong thing to do. If you intended to cover all cases then some kind of switch is expected, and many languages will then catch new conditions being added which aren't covered. If the language does not support a switch statement, then having a catch-all final case is the only sane option. This is because a series of if-else conveys a different meaning than a switch statement, but you actually intended to have a switch statement.
A lot of people avoid switch statements since it’s too hard to enforce good behavior like avoiding fall through. But I get your point.
Not all languages are broken like classic C. Even C/C++ compilers can provide warnings on fall-through and missing cases now. Using those languages without warnings turned on, and heeding all warnings, is crazy.
C# requires full coverage I believe.
Two points of correction/clarification:
1) the solution I provided in the article works for switch statements too
2) the solution I provided gives you feedback at compile time. C# would only help you out at runtime. See here: stackoverflow.com/a/20759116/706768
Oh, I didn't mean to imply there's something wrong with your approach, only that there are simpler cases to convince people of the need for defensive programming. Your solution is fine for the languages you're using.
Hmm, I wonder what language it was, if not C#? I recall one of them would produce an error (and of course C++ with the warning enabled).
Point taken. As for your question, I’m not sure which language besides TS supports checking exhaustiveness. Btw, I had copied the gist link incorrectly in the solution part of the article. You can now see the use of the never type. Woopsie!
@edA-qa, in the .Net world are you thinking of F# maybe?
I'm probably in the minority, but I really hate YAGNI. It is based on the assumption that the cost of doing things when the need arises is the same as doing them now. In practice, this assumption tends to be false, because by the time you actually need to change that code, two things almost always accumulate:

These two factors make it significantly harder to implement the feature when you need it. You are not trading X hours of work now for X hours of work later with a probability of not needing it - you are trading X hours of work now for αX hours of work later, and suddenly it's no longer a matter of best-practice dogmas - it's a matter of risk management, and the estimated probability of not needing it matters.

Even if, after considering everything, you decide not to do it now, you should at least put some minimal effort into making it easier to do in the future - like creating stubs and writing more comments explaining things that may be hard to recall later.
Good point. I also try to call out YAGNI whenever possible. But do you think that avoiding fallthrough in a switch case (as shown in the article) is YAGNI? I’m just curious.
YAGNI is flexible enough to support both. You can say it's YAGNI because supporting more than the current 3 states for a traffic light is not a feature you currently need, or you can say it isn't YAGNI because ensuring the code does not break when you change it is a feature you need.
If you treat YAGNI as a holy dogma then it makes sense - as with all sacred scriptures - to force the definitions to match your opinion. I prefer to treat YAGNI - or any other best practice - not as a rule that cannot be broken (but can be twisted) but as a rule of thumb. Not "you shall not do this!" followed by a thunder noise, but "you should take this into consideration."
So, in this case, YAGNI applies - you should consider the probability that you won't need to support more options. But that's just one factor you need to consider - you should also consider the probability you will need it (1 - ρ), the cost of doing it now, the cost of doing it later, and the potential bugs of both cases. And with everything considered - it's pretty clear that avoiding the fallthrough is simply not worth it.
Thank you for responding. :) I’m a bit confused by this point though:
Because it takes about 15 seconds to add an else/default case that throws an error. And if you have an assertUnreachable function around, then it takes the same time to get compile-time feedback.
I totally 100% respect your thoughts on measuring the cost; after all, modern software development and Agile is all about trade-offs. But if you just make it a rule of thumb to never fall through... you’ve avoided a whole class of bugs and it only took you 15 seconds per switch statement.
As for the use of the word YAGNI, another commenter provided this wonderful quote that has helped clarify my thoughts. Martin Fowler says:
And after using the never pattern (described in my article) for a few months now, I can tell you that it makes refactoring so much easier. And to your point, maybe the refactoring is never required, but I do feel a lot happier in the present when I take the time to pay it forward to future me.
Sorry, that was probably a miscommunication. Since the regular meaning of "fallthrough" (a missing `break;` statement in a `switch` clause that causes execution to fall through from one case to another) does not apply here, I interpreted "avoiding fallthrough in a switch case (as shown in the article) is YAGNI" as referring to the original code - the version where not modifying it "respects YAGNI". Of course the option that does not potentially cause lethal accidents when new options are added is preferable...
A "rule of thumb" does not mean "never do this" or "always do that". What it means is "always consider this". You still need to apply your own judgment.
Best practices are treated too much like holy scriptures: a set of rules, set in stone, that everyone can quote, and whether or not they know the origin of a rule, they assume that it came from God and must never be broken. But for a law to always fit reality it has to be very elaborate, and these best practices usually try to be short and catchy proverbs. So wise sages (like Martin Fowler here) add more interpretations and clauses to make them fit real-life cases.
I really disagree with this approach. Developers like Martin Fowler simply apply their own judgment to the rule for everyone to use, but I think every developer should be capable of thinking for themselves and using their own judgment. You don't need to find some sage to quote to support your judgment - you can provide your own reasoning. Even if you don't have your biography and achievements listed in Wikipedia.
Of course, if your favorite sage published an article or wrote a blog post with well-built arguments, there is no shame in linking it. The point is that you should rely on the logic of the arguments - be they your own or from external sources - and not on the holy wisdom of the arguer.
Yup absolutely. I only quoted him because I thought he expressed a nice sentiment succinctly, and I think it’s important to cite people for their contributions. As for my own thoughts: I have yet to find a good, safe reason to have a default case that handles more than one state (I thought that was called the fallthrough case, but my bad). I think (without anyone else telling me) that it’s better to throw an error (or better yet, create a compiler error like I show in the article) when a not-yet-discovered case is found in the default.
I was hoping that someone would provide a reason to avoid the “rule of thumb.” I like discovering when ideas are not absolutes. The fun is in the gray area. But until someone presents a compelling reason for a non-never default case, I’ll continue to make it a correction on code reviews that are submitted to me. Defense it is.
There is one case I can think of where you want a `default` clause that does not throw an error: handling keycodes (in no particular language).
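Something like this sketch (TypeScript purely for illustration; the key names and handlers are made up):

```typescript
enum KeyCodes {
  Enter = 13,
  Escape = 27,
  Space = 32,
  ArrowUp = 38,
  // ...and roughly 100 more in a real keyboard enum
}

function handleKey(key: KeyCodes): void {
  switch (key) {
    case KeyCodes.Enter:
      console.log("submit the form");
      break;
    case KeyCodes.Escape:
      console.log("close the dialog");
      break;
    default:
      // Deliberately ignore every other key instead of asserting never.
      break;
  }
}
```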
Adding ~100 more `case` clauses for all the other key codes is too much, and you wouldn't want this to fail compilation just because someone updated `KeyCodes` to support some more keys.

Whadya know, an exception to the rule. Bravo! 👏
But yea, I think exhaustively checking every case in the KeyCode enum would be a waste of time and would be way too verbose. I gotta be honest, I wasn’t expecting someone to come up with something that made me think it was wise to avoid the never assertion, but you did. :) I guess that’s the beauty of seeking feedback.
That's why my best practice is to never blindly follow best practices to the letter and always apply your own judgment.
I can't see the point in the example you provided, since a simple dictionary is gonna do the whole if's thing. And voila, defensive coding achieved!
I’m not sure I see how a dictionary would solve this. Could you provide a code example so I could see and understand?
Sure, I'm using Python generally but this could apply to any lang:
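(A rough sketch, written in TypeScript here to match the article, but the idea is identical in Python; the names are just for illustration.)

```typescript
const reactions: { [light: string]: string | undefined } = {
  red: "stop",
  yellow: "slow down",
  green: "go",
};

function driverReactsTo(light: string): string {
  const reaction = reactions[light];
  if (reaction === undefined) {
    // Unknown signal: fail loudly at runtime instead of guessing.
    throw new Error(`No reaction defined for: ${light}`);
  }
  return reaction;
}
```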
As a general rule of thumb, whenever I see many if's, I just think directly: "Can I use a dictionary here instead?"
The problem with that dictionary solution is that it relies on throwing an error at runtime. There’s no information provided to the compiler to catch it sooner. Which means that you’ll only find out if you forgot a case (or a key in the dictionary) if you write a unit test that tests for exhaustiveness. But rather than write a unit test, wouldn’t it be better to find out at compilation that a mistake was made?
So I would recommend that you give the article another read and then try out the samples I provided in the article in a REPL. The advantages probably aren’t apparent without seeing how quickly the error pops up in a TypeScript playground. You’ll find that they allow you to discover bugs much faster (since you don’t have to run the code).
Ah, now I get your point...
You're trying to get things checked at compile time, but I'm not sure if the compiler is really suited to check the app logic (the compilers' job is to check syntax and such).
Not really, I trust my tests more than the compiler, cuz the app logic might break one day (one way or another), and the beauty of tests is to get that logic checked every time you go into the building stage.
@yaser
I completely disagree here; ideally the compiler would check all your app logic too. The more you can get checked by the compiler, the fewer tests you need to maintain. Languages like Haskell are popular just because the compiler can help you a lot.
Tests can never show the absence of bugs, only the presence. Having a type system cut the possible inputs down to (in this case) a finite amount of values is far more valuable than testing the 4 values you mention in your unit tests.
@Cubicle
Not with TypeScript; the following code will throw a compiler error if you change the `TrafficLight` type without adding something to the object:
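(A sketch of that shape; the reaction strings are just placeholders.)

```typescript
type TrafficLight = "red" | "yellow" | "green";

// Record demands a value for every member of the union, so growing
// TrafficLight without extending this object is a compile-time error.
const reactions: Record<TrafficLight, string> = {
  red: "stop",
  yellow: "slow down",
  green: "go",
};

const reactTo = (light: TrafficLight): string => reactions[light];
```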
Great work Yan. Yes, your type would require each key to be present. It’s another thing I love about TypeScript. There are so many ways to express concepts. :)
I agree with you in this TypeScript scenario.
But, as for Python (or any similar lang), I'm not sure this would be the case, since this even goes against the Python motto: "let the exceptions fly and catch them later".
So, the defensive programming model might be different from one lang to another.
Interesting. I tried to find an article about Python and “letting the exceptions fly” but I couldn’t find anything.
One should always choose the best tool for the job. Sometimes that might be throwing/catching an error, and other times it might mean preventing it with the type system. Why limit yourself to one tool?
“When all you have is a hammer, every problem starts to look like a nail.”
I think the last time I heard about it was in a video or so, but the correct idiom is "Easier to ask for forgiveness than permission".
This video explains it in a nice way: youtube.com/watch?v=x3v9zMX1s4s
And this article summarizes things: devblogs.microsoft.com/python/idio...
I can totally relate after I saw how TypeScript goes (I never used it before, just the old normal JS).
I have to agree that if there's ANY chance that a stray value is going to get into your function you should throw an exception (if your type system forbids it then you wouldn't need it.. but....).
Now about your specific example. If/else-ifs are a bit of a code smell in the OO era. That is why people propose the dictionary solution as a table-based solution to the problem. Or one could use an OO-based solution where each type is an object that responds with a method that indicates the desired behavior.
Doing it that way the OO hierarchy (or interface) would force you into doing it right every time.
But barring the specific example, defensiveness is essential. Especially in the new security-conscious world.
If someone throws YAGNI at you, throw the 5 C's of programming:
Clear
Concise
Correct
Complete
and the most important these days: C-secure
You mention that if statements are a code smell... but what does it smell of?
Like, the code in the article does its job and it communicates its intention clearly.
They are an OO code smell.. In OO things should be solved by dynamic dispatch and not by successive ifs since all the rules about a type should be in a class, rather than dispersed all over the program.
A second best option is to use tables (as others suggested) but that's just a fancy if/elseif/...
From a purist OO perspective the solution is to make methods in each type that would solve your problem.
So instead of:
if (type == 'red') {
  doSomething1();
} else if (type == 'blue') {
  doSomething2();
} else if ...
You can just write:
type.doSomething()
Where doSomething() is overridden for each type.
Now you may use a single if/else-if statement or table in one place to convert your string into an object (a factory method), but that's in only one single place in the program.
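A rough sketch of that idea (class names and behaviors are purely illustrative):

```typescript
interface Signal {
  react(): string;
}

class RedSignal implements Signal {
  react() { return "stop"; }
}

class YellowSignal implements Signal {
  react() { return "slow down"; }
}

class GreenSignal implements Signal {
  react() { return "go"; }
}

// The one place in the program where a raw string becomes an object:
function signalFromString(type: string): Signal {
  switch (type) {
    case "red": return new RedSignal();
    case "yellow": return new YellowSignal();
    case "green": return new GreenSignal();
    default: throw new Error(`Unknown signal: ${type}`);
  }
}

// Everywhere else just asks the object what to do:
console.log(signalFromString("red").react()); // "stop"
```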
If you can’t tell me why it smells (as in you can’t tell me what type of bug will occur based off of what you’re seeing) then you might be blindly following doctrine.
And as far as “the OO era” that you mentioned, many of us are moving to more functional concepts like splitting data from logic. I do that not because I want to follow the functional programming doctrine but because I found code to be more testable that way. I’ve also found that I was able to utilize composition much easier when I started to throw away the idea of encapsulation.
Consider checking out this incredible article: medium.com/@cscalfani/goodbye-obje...
The specific bug is that by spreading your type logic all over the program, if you need to update it, you need to find all those if statements to update them. So your code is more error-prone and less maintainable. It's also a violation of the DRY principle. Specifically, if you want to add a new type and forget to update one of your many if statements, you'll have a bug. Or if you want to change the behavior of a type and forget to update it in one of the many if statements.
I did mention "OO era" in this case because I knew the functional style would be mentioned. Note however that with a functional style, you shouldn't necessarily throw away encapsulation. Encapsulation is modeled in your code module. You could do the exact same thing in a functional style without proliferating your code with if statements.
The functional equivalent for this is multimethods. (though I'm not sure the language you are using supports that construct)
See clojure.org/reference/multimethods for example
You might be missing the point when you say this. Based off what you said, I feel that I might not have explained myself well. I’d like to clarify that the assertNever function tells me if a new type was added, and it tells me if one was removed.
So I don’t need to “find those if statements” because the compiler will inform me.
That is a workaround.

But no, the compiler won't tell you. An exception will tell you at runtime. Assuming you tested correctly, you may find this before it hits production. But it's hard to argue that this is better than just using better coding practices.
if/elseif/... is a code smell. A bad practice if it can be avoided. In this case it can be avoided.
Try compiling this code. The compiler will in fact tell you if you’re missing a case that’s described in the discriminated union.
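Here’s a switch-flavored sketch of the same pattern from the article (one case is deliberately left unhandled, so the playground flags it before anything runs):

```typescript
type TrafficLight = "red" | "yellow" | "green" | "red blinking";

function assertUnreachable(light: never): never {
  throw new Error(`Unhandled traffic light: ${light}`);
}

function driverReactsTo(light: TrafficLight): string {
  switch (light) {
    case "red":
      return "stop";
    case "yellow":
      return "slow down";
    case "green":
      return "go";
    default:
      // Compile error, not a runtime exception:
      // Argument of type '"red blinking"' is not assignable to parameter of type 'never'.
      return assertUnreachable(light);
  }
}
```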
Of course, that only works if you recompile all your libraries and dependencies. The OO solution would work even if you only recompiled the class.
If you want to write "defensively," you have to ask yourself: what is the target I want to defend against? The problem with defensive code is that, depending on what your targets are, the workload increases.
Regarding your traffic light example, I would argue that there is no need to implement the "never" strategy. Simply write tests - which you would do anyway - and before it goes to production, the corresponding test would fail, you would detect the booboo, and fix it. No harm done.
But I agree, it is hard to assess the risk for errors correctly.
Hi Thomas. While I appreciate your response, I needed to share that at the time you commented, I had a copy paste error and wasn’t showing the actual solution in the final part of the article. So if you get the chance, you can see how the never type is used and how it helps me to not have to write a test at all.
I think it’s much nicer to not have to write a test for this kind of thing.
That being said, I’m thankful that there are people out there like yourself who want to write automated tests at all. Me too! Let’s keep spreading the test-writing love. :)
Happens ;)
To put it in another way:
Write tests to observe changing behaviour of your application.
If your code has to change due to an error, write a test covering that changing behaviour.
Do not try to cover each and every corner case.
For me there is a distinction to be made when it comes to defensive programming. On the type level you should be as defensive as possible. Your types should result in compiler errors if you do not handle new cases in the future. Make your types as small as possible, just like you did in your article (use a sum of three distinct values instead of simply a string).
On the value level, on the other hand, I really dislike defensive programming, usually because it means that your compiler's type system is not strong enough to check these things at compile time. The prime example here is the classic `null`. TypeScript has non-nullable types, but Java programmers, for example, have to put `if (x != null)` everywhere, just in case.

I would write this code snippet like this:
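Something in this spirit (a sketch; the names are illustrative):

```typescript
// Type level: as strict as possible.
type TrafficLight = "red" | "yellow" | "green";
type Reaction = "stop" | "slow down" | "go";

// Value level: nothing but the mapping I actually care about.
// Adding a light to the type without a reaction here fails to compile.
const reactions: Record<TrafficLight, Reaction> = {
  red: "stop",
  yellow: "slow down",
  green: "go",
};

const driverReactsTo = (light: TrafficLight): Reaction => reactions[light];
```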
The value level is short and concise, it only contains what I want to do (ie mapping a signal to some action) with not much boilerplate syntax. The type level code is almost the same length, guaranteeing that adding future cases will result in a compiler error.
OP: This is another great solution ^
Interesting article ( and very interesting blog btw :-) )
I don't really like the term "defensive programming" because over the years I've seen too many bloated methods (even in typed languages like C# and ActionScript) full of null checks and undefined checks for basically everything. And every time I asked "but why...??" the response was always "just to be sure," or "better safe than sorry," or "do you really wanna risk a crash on production!?!?" (the last being pronounced with a horrified face showing disappointment at my being so irresponsible).
But I agree that it is better to make the best out of our coding style and out of the tools we have (strictly typed languages, linters, unit tests) to prevent errors that might happen in the future.
The first thing that I thought when I saw your example was that I would have used a Dictionary/Map to retrieve the right action for the right signal. If there is no mapping, then there would be an error. Of course, that would not work at compile time - so the solution suggested here is perfect and elegant.
But when it comes to really being defensive, I sometimes get very paranoid and think of what could happen at runtime. All your type checking works only at compile time: in fact, the elegant solution would be converted to this in simple js:
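(Roughly, once the types are erased:)

```js
// After compilation there is no TrafficLight type left to protect you:
var reactions = {
  red: "stop",
  yellow: "slow down",
  green: "go",
};

function driverReactsTo(light) {
  return reactions[light]; // driverReactsTo("blue") quietly returns undefined
}
```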
And nothing would prevent your function from being executed with invalid values (imagine that the signal is a value coming from the server or from any external API).

Of course, with a mapping instead of an if or switch you will get "undefined" rather than "go," but you might end up with unpredictable behaviour anyway. In such a case, an error thrown (and properly handled) would be better.
So to conclude: as always Defensive Programming: Good? Bad? It depends :-)
Thank you for your kind reply. Yes, I’ve been cooking up the blog articles for many months now. :)
So as for the code in your response, I’m a bit confused about the benefit. Your function returns undefined (as you mentioned). I don’t understand why you feel that’s a good thing. That means that consumer code needs to handle the undefined case every time, and if they forget to do so, it will be a runtime exception. And that exception would be some kind of unclear “undefined exception.” Even in JS it would be better to throw an error so as to be explicit about the reason why it couldn’t find a match. But there’s a better way with TypeScript where you can find out months before an error would ever be discovered. The compiler is run on every check-in. Why wouldn’t you want the faster feedback of finding out when you check your code in?
Personally, I find that I go much faster if I have a short feedback loop where I can find out what I did wrong. Now, I realize that I’m arguing for static type analysis, which is not everyone’s cup of tea (although the recent Stack Overflow survey showed immense support for TypeScript). But the main point I’d like to make is that it truly is better to avoid bugs in production if you can catch them sooner.
Probably I did not articulate my reply properly.

I totally agree on the benefit of static type checking, and I definitely want to catch errors at compile time.

And I said that I would implement that check not with an IF/ELSE IF nor with a SWITCH, but rather with a mapping like suggested above by @jvanbruegge.

BUT I would also be even more defensive and check for possible errors at runtime, in case the function is called with an invalid value at runtime.
At runtime TypeScript does not exist; after you compile, all your type checking is gone and what you have is the function I posted.

See and play around with this TypeScript Playground snippet for comparison
Therefore, if the invocation of the function could be dynamic (the server could respond with blue, or the user could type in orange), I would add a catch and a fallback to prevent a runtime error.

Hope I was clearer now :-)
Yup yup. Yea the map + the undefined check would be the ideal approach. Sorry I didn’t understand at first. Classic misunderstanding with remote communication. My bad! :)
Oh and as far as the server sending bad or new data types, I have been using a library called TSOA to enforce runtime types at the boundaries. It’s a way of preventing “garbage in garbage out.” It’s pretty cool stuff. There are similar libraries that do runtime checking in the UI too.
No, I would not add runtime checks to this. If you call the function with something different than what the types specify, that's on you. The caller has to verify that they can call the function.
I think it depends on how complex it will be when you try to be "defensive". There are times when the code gets too complex to understand because we're trying to handle a lot of cases, when it is just a simple feature.
Simple features have a habit of becoming complex features as time progresses and the application grows. That’s why defensive programming provides a foundation so you can grow the program without worrying.
It takes experience to determine which simple features have the potential to become complex ones. If the code gets too complex and I'm spending too much time, I will always revert to KISS and YAGNI and not be paranoid about future use cases, because they will show up anyway and we'll need to do some CR/Bug/Enhancement about it. I focus more on good design so that code can be easily refactored.
If you’re waiting til someone makes a bug report, then you’re waiting too long. That’s by definition reactive. The approach in this article is proactive.
The best bug is the one that never makes it to production.
Yeah, though I still think it should be a balance; you can also be too proactive, and it might cost you too much time when you could just design it better. If you are already good at it then that's awesome.
In your example, in order to add a new traffic signal, you don't file a bug. You need an enhancement for that, and you can refactor the code the way you did. But in other cases, simple code that only handles 1 case but is made too complex will be a maintenance nightmare for you and your team.
I think it is valuable to not only think of defensive / anticipatory / forward thinking programming in terms of the code only, but to also think about it in terms of the domain.
In other words: when this code fails, will it result in a safe / acceptable condition in the physical domain? Disregarding all other safeguards, this code would fail with "green" (which is probably the least desirable outcome with regards to the domain), but can be easily changed to fail with "red". If there is no buy-in to change from a code perspective, at least convince them to make a change that will fail to a safe domain condition.
Interesting thought. I suppose the whole idea is that I don’t think we can ever know what the future holds. So how can we (as you say) “fail to a safe domain condition?” Because we don’t know what is safe for a case that we haven’t discovered yet. So for instance, the safest response is to stop. But if you stop at a blinking yellow, you might enrage the driver behind you who was expecting to pause. I’m sort of joking. But yes, I will meditate on what you’ve suggested. I think you might be on to something. :)
I had exactly the same thought when I saw the example, and can't stress enough how much I think this is a great approach