As budding software engineers, we typically spend the first few years of our careers studying various technologies and concepts from the programming world - what a list is, what a hash table is, what JavaScript is, what HTTP is, what a client-server architecture is...
The goal of this endeavour is to accumulate the skills and knowledge necessary to land a first job.
But beyond understanding "what" something is, the preliminary knowledge required to land your first job has expanded over the years to cover topics that help one write not just software that works, but "good code".
Most aspiring software engineers nowadays are advised to read up on the SOLID principles and on what "clean code" is. They're also taught to structure their applications using the MVC "architecture", with separate models, views and controllers.
Concepts vs. Principles
The difference between learning how a linked list works and what the S in SOLID stands for is that the former is a concept. It is easier to understand because it solves a problem familiar to a beginner - efficiently storing data in particular use cases.
The Single Responsibility Principle, on the other hand, is a principle carved into our software engineering books by the deliberate efforts of generations of programmers, who accumulated years of both good and bad experience and distilled what they learnt into a single-line principle.
But the simplicity of it is deceptive.
Memorising a single-line principle and a few examples of applying it is simple, and anyone can do it. Any junior developer learns to cite the S in SOLID by heart before their first job interview.
Truly understanding it, however, is much more challenging.
To understand and appreciate the solution, one must first understand the problem it solves.
Then, you can look at the principle as a clever approach to dealing with a common problem.
Trying to use a principle without understanding the problem it solves is like trying to use a hammer when you don't know what a nail is.
But we, as a society, have always tried to circumvent the long and hard road to true understanding. We're always looking for a shortcut.
Hence, we teach what SOLID is quite well, but we don't teach why it exists or what problems it solves nearly as well.
Suddenly, you receive a hammer and everything looks like a nail.
Good and Bad Practices
As a result, we've developed the notions of good and bad practices.
We've gone so far as to be disgusted by certain patterns in our codebases. Using a global variable, or directly initialising a dependency in your class instead of using dependency injection, is considered a sin.
Why?
Because it's a "bad practice". It is not "clean code".
However, in programming there are no perfect choices. There are only trade-offs.
Every little idiom or witty technique you apply in your codebase has a cost associated with it. Most often, that cost is an increase in the complexity of the codebase.
After you do this too many times, it suddenly takes an hour to initialise a simple utility class you need for a small part of the larger task you are solving.
You want to construct a new object directly? No, you can't do that. You should use a factory instead. But wait, to make things even more sophisticated, we'll make it a "builder factory" so that the code is "extensible".
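To make the layering concrete, here is a small sketch of the two routes, using a hypothetical Logger type (the names are mine, purely for illustration). Both produce exactly the same value; the second just makes you traverse a factory and a builder to get there.

```go
package main

import "fmt"

// Logger is a plain type we want to use for a small task.
type Logger struct {
	prefix string
}

func (l Logger) Log(msg string) string {
	return l.prefix + ": " + msg
}

// Direct construction: one line, obvious.
func direct() string {
	l := Logger{prefix: "app"}
	return l.Log("started")
}

// The "extensible" route: a builder, produced by a factory,
// adding two layers of indirection for the same result.
type LoggerBuilder struct {
	prefix string
}

func (b *LoggerBuilder) WithPrefix(p string) *LoggerBuilder {
	b.prefix = p
	return b
}

func (b *LoggerBuilder) Build() Logger {
	return Logger{prefix: b.prefix}
}

func NewLoggerBuilderFactory() func() *LoggerBuilder {
	return func() *LoggerBuilder { return &LoggerBuilder{} }
}

func indirect() string {
	factory := NewLoggerBuilderFactory()
	l := factory().WithPrefix("app").Build()
	return l.Log("started")
}

func main() {
	fmt.Println(direct())   // app: started
	fmt.Println(indirect()) // app: started
}
```

Multiply this by every type in the project and you get the hour-long initialisation described above.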
This vicious cycle continues to the point where you have a small project that requires a PhD in software design to understand. And I've actually seen that.
I've worked on a giant project with half a million lines of code that was easier to understand than a 10k-line one.
That is the price we pay for over-engineering by blindly following the "good practices".
Oh, and by the way, this problem extends beyond code. It has become prevalent in our system designs as well.
For example, most projects built with a "microservices" architecture I've seen don't really need microservices. They could have just stuck with the not-so-glossy monolith and saved themselves the dozens of problems the shiny buzzword brings with it.
Dealing With Imperfect Principles
Although I've been painting a not-so-rosy picture of all the "good practices" in this article so far, note that there is nothing wrong with the principles themselves. They have become popular and widely adopted because they are quite effective in solving some of the problems we often face while writing code.
However, without understanding both the problems they solve and the problems they introduce, it is hard to recognise when to use them and when to avoid them.
Therefore, before applying any design pattern or good practice, read up on it. Don't just pick up the first vague article that either simply explains how to use it or, worse, tries to sell it by outlining all the benefits while omitting the costs.
Better still, try to figure it out yourself. Oftentimes, techniques for writing good code address the problem of making your project more maintainable.
What that means is based on context.
Sometimes, making your project more maintainable means writing code which is easy to test. Other times, it is about writing code which is easier to read.
Some techniques serve more specialised goals - for example, decoupling the frameworks and peripherals of your codebase from the rest of the project.
And note that certain techniques might aid you in some of those goals, while impeding you in others.
For example, applying dependency injection might make writing tests easier, but it makes initialising and using your components harder. Is it worth the cost?
It depends. Sometimes it is, other times it isn't. You are the one to decide as only you understand your project well enough.
A Case Study
Oftentimes, sticking to simpler idioms, condemned as bad practices, might actually be more beneficial in your use case.
Check out Go's standard library http package, for example. In it, there is a global HTTP client variable, http.DefaultClient.
Perhaps some bells start ringing right now - that makes the package harder to test, it raises the risk of thread-safety issues, it breaks encapsulation, and so on.
And although most of those claims might be true, there is another thing which also holds true - it makes using the package much easier for simple use-cases.
Thanks to that decision, in order to make a simple GET request, all you need to do is:
http.Get("http://example.com/")
That's it. There are no initialisations, no configurations, no need to look into how to set up your timeouts and headers. You just stick to the reasonable defaults the global variable gives you, and in most cases that works just fine.
This is an example of how, for the Go team, the widely adopted wisdom of avoiding global variables didn't fit the specific goal they were aiming for - making the http package easy to use.
Conclusion
In software, there are no silver bullets. Not even the globally-recognised principles for writing good code.
Every pattern, principle and idiom is created with a specific problem in mind. And all of them have a certain cost you'll have to pay for applying them.
In order to use them effectively, you should examine both sides of the coin.
Otherwise, you will have to continue your endeavour in software engineering by blindly following certain patterns while condemning others as bad practices.
At that point, decision making will no longer be about making rational choices, but more about maintaining religious beliefs.