Nick Janetakis

Optimize Your Programming Decisions for the 95%, Not the 5%

This article was originally posted on November 26th 2018 at: https://nickjanetakis.com/blog/optimize-your-programming-decisions-for-the-95-percent-not-the-5-percent


A few weeks ago I came across an interesting post on Hacker News titled "Why I wrote 33 VSCode extensions and how I manage them".

That title really grabbed my attention, so I did what most of us do: head straight to the comments before reading the article.

That's where I discovered this comment:

My problem with adding plugins or extending my environment much past the default is that eventually I have to deal with a co-worker's non-extended default installation. I end up relying too much on the add-ins.

Reading that really hit home for me, because that's how I used to think a long time ago.

But then I came across this comment:

I strongly dislike the reasoning that suggests you should hamstring yourself 100% of the time to accommodate a potential situation that may affect you 5% of the time.

"I don't use multiple monitors because sometimes I'm just with a laptop".

"I don't customize my shell because sometimes I have to ssh to a server"

"I don't customize my editor because sometimes I have to use a coworkers editor".

And here we are because I think this is a really underrated topic.

"What if" Conditions

Many years ago I remember avoiding Bash aliases because, you know, what if I SSH into a server? It might not have those aliases, and then I'm done for!
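
To put that in perspective, the kind of thing I was depriving myself of is a couple of one-liners in a ~/.bashrc file. These are hypothetical examples, not anything special:

    # ~/.bashrc (hypothetical aliases)
    alias ll='ls -alh'                 # long listing with human readable sizes
    alias gs='git status'              # the command I type the most
    alias dcu='docker-compose up -d'   # start the project in the background

If I land on a server that doesn't have them, typing the full command still works. The aliases are pure convenience for the 95% of the time I'm on my own machine.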

I was optimizing my development environment for the 5% and all it did was set things up to be a constant struggle.

The crazy thing is, back then it made a lot of sense in my mind. It's very easy to talk yourself into agreeing with some of the quotes listed above and many more.

But optimizing for the 5% is an example of optimizing for the "what if" scenario.

You do everything in your power to make sure what you're doing is generic enough to work everywhere, but what you're really doing is making things harder for yourself in the 95% case. And the 95% is what matters most.

It Affects the Code You Write

This isn't limited to development environment decisions either. It also affects the code you write.

If you try to write something to be fully generic from the beginning because "what if I make another application and it needs to register users?" then you typically make your initial implementation a lot worse.

Without a deep understanding of what you're developing, and without having put in the time to come up with good abstractions based on real experience, you're just shooting in the dark, hoping your generic user system works for all cases when you haven't even programmed it for a single use case yet. How is that even possible?

It Affects How You Architect Your Applications

When you blindly follow what Google and other massive companies are doing, you're optimizing for the 5% in a slightly different way.

Instead of just getting your app up and running and seeing how it goes, you try to make decisions so that your application can be developed by 100 different teams sprawling across 5,000 developers.

Meanwhile, for nearly every new project, it's just you developing the app by yourself.

It Affects How You Deploy Your Applications

When you try to optimize your deployment strategy to handle a billion requests a second from day 1, you're just setting yourself up for an endless loop of theory-based research.

It often includes spending months looking at things like how to set up a mysterious and perfect auto-healing, auto-scaling, multi-datacenter Kubernetes cluster, but it leads nowhere because these solutions aren't generic enough to work for all cases without a lot of app-specific details.

Do you ever wonder why Google, Netflix, GitHub, etc. only give bits and pieces of information about their deployment infrastructure? It's because doing so improves their chances of ending up with better tools for their specific needs.

What better way to get people interested and working on their open source projects than to make these tools look as attractive as possible and then back it up with "we're using this to serve 20 billion page views a month so you know it works!".

It's a compelling story for sure, but it's never as simple as plugging in 1 tool like Kubernetes and getting the perfect cluster you've envisioned in your head for your app.

It's easy to look at a demo based on a toy example and see it work, but all that does is make the tool look like paradise from the outside. It's not the full story.

As soon as you start trying to make it work for a real application, or more specifically, your application, it all falls apart until you spend the time and really learn what it takes to scale an application (which is more than just picking tools).

The companies that created these tools have put in the time over the years and have that knowledge, but that knowledge is specific to their application.

They might leverage specific tools that make the process easier and tools like Kubernetes absolutely have value, but the tools aren't the full story.

What if putting your app on a single $40 / month DigitalOcean server let you have zero downtime deploys and handle 2 million page views a month, with tens of thousands of people using your app, without breaking a sweat, and all without Kubernetes or flipping your entire app architecture upside down to use "Serverless" technologies?

I Used to Do All of the Above Too

I've been saying "you" a lot in this post but I'm not targeting you specifically or talking down to the programming community as a whole.

I've done similar things to everything that was written above but with different tools and different decisions because technologies have changed over time.

I can distinctly remember when all of this switched in my head too. It was when Node first came out, about 8-ish years ago.

I remember being fairly happy using PHP, writing apps, shipping apps, freelancing, etc. But then I watched Ryan Dahl's talk on Node (he created Node) and I started to drink the Kool-Aid for about 6 months straight.

Thoughts like "Holy shit, event loops!", "OMG web scale!", "1 language for the back-end and front-end? Shut up and take my money." were now buzzing through my head around the clock.

So all I did was read about Node and barely wrote any code. Eventually I did start writing code, and while I learned quite a bit about programming patterns and generally improved as a developer, I realized none of the Node-specific bits mattered.

And that's mainly because back-end and front-end development is always going to have a context switch, even if you use the same language for both, and lots of languages have solutions for helping with concurrency.

Those 6-ish months were some of the most unproductive and unhappy days of my entire life. Not because JavaScript sucks that hard, but because when you're on the outside, not doing anything and wondering "what if", it really takes a toll on you.

I'm still thankful I went through that phase because it really opened my eyes and drastically changed how I thought about everything -- even outside of programming.

Premature Optimization Is the Root of All Evil

Donald Knuth said it best in 1974 when he wrote:

Premature optimization is the root of all evil.

Optimizing for the 5% is a type of premature optimization. Maybe not so much for your development environment choices, but certainly for the other cases.

Base your decisions on optimizing for the 95%, keep it simple and see how it goes. In other words, optimize when you really need to, not because of "what if".

What are some cases where you optimized for the 5%? Let me know below.

Top comments (30)

Ben Halpern

The other day, while working on someone else's machine, I found myself not being able to remember a basic shell command that I had been aliasing. I Googled it and found the answer. I felt a bit silly that I couldn't remember something basic I used to know, but it was easy to remedy. Aliasing hadn't decayed my ability to think through the problem itself.

I think there are some who over-optimize, and that is to be avoided. There's a sensible stopping point, but if you over-anything you're going to have problems.

Great post.

Nick Janetakis

Thanks.

Aliasing hadn't decayed my ability to think through the problem itself.

Yeah, and thinking it through is the most important part. Now you get the best of both worlds. For the 95% you can use your aliases and be productive, and in the 5%, you knew what went wrong and it was only a quick Google search away.

I think it all boils down to just doing things and letting the "doing" guide your actions. Instead of thinking about "I need 100 aliases set by 10am SHARP!", just pay attention to your own patterns. If you find yourself Googling the same command a few times, make an alias for it then.
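
For example, and this is a made up command just to show the pattern, the third time you catch yourself looking up how to find the biggest files in a directory, that's the moment to alias it:

    # hypothetical alias, added only after the command proved itself a few times
    alias biggest='du -ah . | sort -rh | head -n 20'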

Maria Michou

Couldn't agree more. When you notice a pattern of searching for the same things over and over, then you should do something about it and create an alias (if it's a simple command), a shell function or script (if it's a series of commands), or even boilerplate code or modules if you reuse the same components/functionality over and over.

Parambir Singh

Over the years I've realized that keeping 'work notes' has increased my productivity. Having a bunch of text files in a folder that I can grep has saved me countless hours googling for solutions that I've already googled before.
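
Something as simple as this covers most of it (the notes folder and the search term here are just examples):

    # search every note for anything mentioning "ssl renewal", case-insensitive
    grep -ri "ssl renewal" ~/notes/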

I'm not saying googling is bad. But sometimes it can be a time sink. You know "replace all instances of word x with y" --> "grep cookbook" --> "my cool vim macros for finding/replacing text" --> "my better emacs macros for finding/replacing text" --> "how I manage my vim plugins" --> "look at my cool .vimrc" --> "Why I moved from vim to VSCode" --> .....................................

:)

Basti Ortiz

This is a very insightful article. I, too, have been a victim of premature optimization. Programmers have been "raised" to think that everything must be generalized; everything must be abstracted. By following that way of thinking, we have doomed ourselves with the points you raised in the article.

It drives me insane sometimes when I overthink a certain feature I want to add to my code. I often get lost in the forest by thinking of the "best way to implement this feature for reusability, performance, and maintainability" rather than making actual progress.

Nick Janetakis

Yeah, it's very non-intuitive too, because reusability, performance and maintainability are fantastic things to have.

The non-intuitive part is that you only get there by making progress and uncovering the problems that lead to non-generic, slow and unmaintainable code, but at the same time, some of those problems aren't even problems that need to be solved.

For example, I'm totally cool investing my time and energy into making a unique user system for a long running project I'm working on. It's not like I'm sitting there creating 15 new applications a day that each require a generic user registration system based off that single unique project.

Basti Ortiz

Exactly. It's just so strange how the supposed "good things" to strive for are the exact same things that may inhibit progress.

Jacque Schrag

This was a fantastic article, and I'm going to share it with my team to read. I recently was doing a code review with a newer developer and I noticed that they had made some design choices in how they wrote that code that optimized one particular section to the detriment of a couple of other, more frequently used sections. My advice to them was to write code for the rule, not the exceptions. I hadn't considered how that applies more broadly to optimizing workflow, but that definitely rings true in my experience.

Personally, I often try and over optimize before I've even begun. Every time I think of a new side project, I do what you described and try and make it generic and reusable so others can use it too. Consequently I get overwhelmed and rarely build any of the ideas I was thinking about. I really appreciate the reminder to not over-optimize in areas like that.

Nick Janetakis

Thanks for reading.

Consequently I get overwhelmed and rarely build any of the ideas I was thinking about.

Yeah, this sometimes happens to me too. I think it partially stems from there being this trend where you need to open source everything you work on from the beginning.

Instead of just getting your hands dirty and solving the problem you have first, you start thinking about trying to carefully code the perfect solution from the beginning because "but people are going to read my code!", or you get sidetracked for 5 hours trying to decide on which license to pick.

By the time all of that happens, you throw your hands up in the air and call it quits before you even had a chance to write 1 line of code.

This is the web developer freelance business equivalent of spending a month creating business cards, an LLC and a fancy portfolio site when all you really need to do is talk to a potential client (which you could do in literally 10 minutes without any of that stuff).

Matt Miller

Every time I think of a new side project, I do what you described and try and make it generic and reusable so others can use it too. Consequently I get overwhelmed and rarely build any of the ideas I was thinking about.

I do this every time in my side projects! I get overwhelmed architecting some ultra-reusable thing up front to solve the problem I'm interested in. Instead, what happens is I get lost in that forest of decisions and make 10% of the generic solution. I know I should just prototype the actual problem first and then worry about whether it's reusable later, but I keep getting stuck on made-up architectural issues. It's a difficult habit to break.

Thomas Werner

Great post!

Thank you for making a case for using a beefy single server instead of overengineering everything for the serverless cloud from the start. I agree 100% that it is better to thoroughly understand the challenges of scalability, how it applies to your own situation, or if it applies at all, instead of betting everything on the latest trending technology du jour that might lock you in and make you regret it down the road.

A single strong database and web server can go a long way.

I've also been making all the mistakes you mention in terms of software development. I'm 100% OK with customizing the heck out of my dev environment though :-D

My personal obsession with overthinking problems and trying to find the best approach was partly unhealthy perfectionism, but also partly the way I grew up with PCs and how I learned to write programs for them. I know what it means to write code for an under-powered platform where optimization was a necessity in order to get usable performance. There was little RAM, little (or no) storage. Updating the screen could be very slow unless you knew how to write directly to video RAM. Preferably in machine language. That became pretty much a non-issue over the years, but when smartphones, iOS and Android became a thing I felt thrown back to that time. Once again we were dealing with relatively weak hardware and its shortcomings that could be worked around with some trickery. I wrote a couple of graphics processing apps and hit the limits of Dalvik and the ARM CPU pretty quickly. Fixed point integer computation instead of floating point was necessary. Native code close to the metal instead of Java interpreted in Dalvik was the solution.

These are not examples of premature optimization though. It was necessary.
These examples are more of something that might have shaped my mindset over the years to look for ways to optimize things before I even run into issues.

This, and also what Some Dood mentioned in his comment: our abstract, methodical thinking where we are conditioned into generalizing things and finding ways to create reusable code. I also know that it can be incredibly appealing for developers to create reusable code for the sake of being reusable, as if that library was the product and not the application using the library. This is a trap I ran into time and time again. That's basically when I was caught up in programming land, and didn't care much about end user land.

In itself aiming for generalization and reusability is a good and useful practice. But like anything in life, too much of a good thing can turn into a bad thing.

Alain Van Hout

I tend to follow a modified form of YAGNI: use the simplest approach that would not block you in the future, i.e. you don't code for the myriad of future hypotheticals but you do keep paths open towards potential solutions for those hypotheticals.

As to the editor discussion, I generally use a (good) IDE, which has lots of default yet optional features.

Ricardo Costeira

As developers, we’re constantly bombarded with information about how companies most of us admire (in terms of technical achievements at least) like Google or Uber did this amazing thing that solves all their problems. Since it’s in our nature to follow others by example, it is especially enticing to follow their lead, regardless of the domain we’re working on.

This post really opens your eyes on this matter. Keep your feet grounded, focus on your own domain, and solve your own problems in a way that also leaves the door open for the near and possible future.

Thank you for writing this.

Ross Henderson

That is what my current (soon to be previous) work environment was like. Everything had to be 100% uniform. If it made sense to do it to one app, we should do it to another app by default. It drove me nuts.

ricardodnb

The part about researching the best architecture or dev stack to handle 1 billion requests has happened to me 3 or 4 times in weekend draft apps I developed.
Guess what, most of the time the apps were never ready to be used because of the what ifs 😁

Advice: Code with whatever you know, get your software running, and then, if you need to, handle the optimizations!
Your app doesn't need to be fast from the ground up because at the end of the first deploy it has only one user -> you

Fagner Brack

Donald Knuth never said “(or at least most of it) in programming.” These are your words or somebody else's. It's very unethical to quote wrong things and not even mention the source. Here's the original paper, please read it first, page 268: web.archive.org/web/20130731202547...

Having a generic understanding of the git/bash/CLI tools of the work environment is useful for a team like mine, where we practice Mob Programming every day to achieve 10x performance. We practice Pair Programming sometimes, and solo only for obvious tasks.

All that doesn't lead us to over-engineer or over-design the code base for requirements that don't exist. You're more likely to do that if you work by yourself, alone, and that's the main problem. We can be pretty lean and write only the simplest code, moving in the direction of scale once that's necessary.

Only skills built with practice and mentoring will give you that; it's not enough to optimise for the 95%, because that's only your knowledge of what the 95% is. You don't know what you don't know.

Nick Janetakis

Thanks for the heads up on the quote. I guess someone should remove it from his official "quotes" page on Wikiquote. It's listed at en.wikiquote.org/wiki/Donald_Knuth.

I changed it to what it is in the paper you linked.

Fagner Brack

Oh wait! That's a different paper. It looks like he said it somewhere else using different words. TIL.

The reason I raised the concern is that I've been caught in that trap. The internet can be misleading sometimes, see hackernoon.com/the-danger-of-relyi...

I didn't read that paper and I don't know where to find it. It would be good to check the original to see if that's exactly what's written there, or to link the source (even if it's Wikiquote). I've seen multiple publications slightly changing the quote, and then what we know today is a completely different one. Think of an effect like the telephone game you played at school. Even websites like Wikipedia get this wrong because nobody bothers to check the source, or the source is not available anymore!

ohffs

One of the few things I remember from my college days was a lecturer who said 'optimise for the common case'. It's always stayed with me over the years and I try and keep it in mind when I find myself over-thinking or dwelling on 'what if...' :-)