Software development gets passed down as an oral and written history of mistakes and learnings — and we wind up with a lot of "rules of thumb". Some of them are not as universally useful as some make them out to be. What are they?
I'll start by saying that DRY (don't repeat yourself) is not entirely untrue, but an over-simplification to the point of harm as a principle.
Stop Writing DRY Code, by Dylan Anthony (Apr 5)
Two ideas that conflict with DRY are the law of leaky abstractions and the rule of three, both of which encourage skepticism of mismanaged attempts at DRY.
I think DRY means well, but it's often applied harmfully.
"Every piece of knowledge must have a single, unambiguous, authoritative representation within a system."
Repetition is always better than the wrong abstraction.
It took me a while to realize this but 100%
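To make "repetition is better than the wrong abstraction" concrete, here's a minimal hypothetical sketch (the function names and flags are invented for illustration): two notification paths get merged too early in the name of DRY, and the shared helper starts accumulating a flag for every caller's special case.

```python
# Hypothetical sketch: two code paths merged too early under DRY.
# The shared helper grows a flag for each caller's special case.
def send_notification(user, message, is_sms=False, is_digest=False):
    if is_sms:
        body = message[:160]          # SMS truncation rule
    elif is_digest:
        body = f"[Digest] {message}"  # digest prefix rule
    else:
        body = message
    return (user, body)

# The "repeated" versions are longer in total, but each one is
# trivial to read and to change independently:
def send_sms(user, message):
    return (user, message[:160])

def send_digest(user, message):
    return (user, f"[Digest] {message}")
```

The rule of three suggests waiting until a third genuinely identical case appears before extracting the abstraction; by then you know which parts actually vary.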
YAGNI is a good idea to have in mind, but I see it being used by grumpy programmers who want to win an argument.
"You aren't gonna need it" (YAGNI) is a principle which arose from extreme programming (XP) that states a programmer should not add functionality until deemed necessary. XP co-founder Ron Jeffries has written: "Always implement things when you actually need them, never when you just foresee that you need them." Other forms of the phrase include "You aren't going to need it" (YAGTNI) and "You ain't gonna need it" (YAGNI).
You can't just pull from a rule of thumb to win an argument against your PM. Have a real conversation.
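The speculative generality YAGNI warns about tends to look like this (a hypothetical sketch; all names and parameters are invented): a function grows options for requirements nobody has actually raised yet.

```python
# Hypothetical sketch: "future-proof" version with options nobody asked for.
def export_report_speculative(rows, fmt="csv", delimiter=",",
                              encoding="utf-8", compress=False,
                              cloud_target=None):
    ...  # every unused option is code to test, document, and maintain

# The YAGNI version does only what today's requirement needs:
def export_report(rows):
    return "\n".join(",".join(map(str, r)) for r in rows)
```

The point isn't that the extra options are never needed; it's that each one is a cost you pay now for a benefit you only foresee.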
DRY has been mentioned, so here's a second one: "When you have a hammer, everything is a nail".
Some new tech actually needs an exploratory phase where, yes, you have to consider everything as a nail—until you figure out what is and what isn't. It's easy to come 10 years after everything has been figured out and say "You never needed React/Blockchain/SaaS, etc.".
But without prior knowledge, you need to hammer blindly at some point. Who would have thought we would have email in the browser for sending large files? Well, have fun with your FTP then…
That's a really interesting take. I'd say some of the problem arises when so many people are trying to profit too early during the exploratory phase. A lot of hammer salesmen selling in to all the wrong markets and seeking a quick markup on their hammer investment.
Yeah, I agree, some of this marketing BS is tiring indeed. But a lot of stuff gets done because people randomly try things for no reason. The hammer was most likely invented before the nail… :p
Maybe "Avoid Premature Optimisation". Like all these principles, it's well meaning and well founded, but traps lurk within. It's easy to reach a stage where retrofitting the optimisation, by the time it's proved to actually be needed, is WAY harder than if it had just been planned in from the start.
Good point ... I think you need to think about it and plan for it, but not always implement it right away.
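One way to "plan for it without implementing it right away", sketched hypothetically (the class and its fields are invented for illustration): put the lookup behind a single seam, ship the simple version, and retrofit the fast path later without touching any caller.

```python
# Hypothetical sketch: plan the seam now, optimise later.
class UserStore:
    def __init__(self, users):
        self._users = list(users)   # simple version: linear scan
        self._index = None          # placeholder for a future dict index

    def find(self, user_id):
        # Callers only ever use find(); the implementation can change freely.
        if self._index is not None:
            return self._index.get(user_id)
        return next((u for u in self._users if u["id"] == user_id), None)

    def build_index(self):
        # The planned optimisation: O(1) lookups, added once profiling
        # shows the linear scan is actually a bottleneck.
        self._index = {u["id"]: u for u in self._users}
```

The design cost of the seam is tiny up front; retrofitting it after a dozen call sites do their own scans is what gets WAY harder.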
"Clean code".
People assume that the process for "clean code" is "code should be clean from the moment you try to make it work to the end". No. The very principle of clean code is "make it work, even if the code is crap. Then, once it works as you'd expect, then change it to make it clean"
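"Make it work, then make it clean" might look like this hypothetical two-pass sketch (the functions are invented for illustration): the first version is crude but demonstrably correct, and only then gets cleaned up with identical behaviour.

```python
# Hypothetical sketch: first pass, crude but working.
def total_v1(items):
    t = 0
    for i in items:
        if i["qty"] > 0:
            t = t + i["qty"] * i["price"]
    return t

# Second pass: same behaviour, cleaned up once it demonstrably works.
def total_v2(items):
    return sum(i["qty"] * i["price"] for i in items if i["qty"] > 0)
```

The key discipline is that the clean-up step preserves behaviour, which is exactly what tests written against the crude version let you verify.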
This literally doesn't answer the question, but a really tremendous principle I was thinking about recently is "Principle of least surprise" — it's not prescriptive enough to be overbearing, but really has empathy for other developers and/or users baked in.
What an audience finds astonishing relates to their background and general familiarity. So the principle only works relative to an implied audience, which makes it somewhat subjective.
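A classic least-surprise example, sketched hypothetically in Python: a function that silently mutates its argument surprises most callers, while one that returns a new value and leaves the input alone does not.

```python
# Hypothetical sketch: surprising vs. unsurprising API behaviour.
def sorted_surprising(xs):
    xs.sort()          # surprise: the caller's list is silently reordered
    return xs

def sorted_unsurprising(xs):
    return sorted(xs)  # input untouched, as most callers expect
```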
"Code should be written so that the most junior developer can understand it."
What utter BS
In the extreme it's important in environments where "developer fungibility" is valued.
In the best case it's motivated by the reasonable desire to minimize the bus factor; in the worst case it's a sign of a culture of assembly-line coding.
That said, if you have trouble understanding code you wrote three months ago perhaps it's time to dial things down a bit - it can be a tricky balance.
The saying is utterly dependent on the quality of your most junior developer :-)
All of them. As part of the human learning process, we all tend to take something that worked out well in one scenario and try it on everything. In the small and the large. That's when you see posts extolling only the virtues of a new (to the author) tech or strategy. Examples: DRY, microservices. Then many people try it and are plagued by undiscovered downsides. Then they post articles condemning it. Eventually we gain a cultural understanding of where it fits and where it doesn't. That's what the Gartner hype cycle is meant to measure. And often the corpus of articles on a given topic indicates where we are with it.
Single Responsibility Principle (SRP):
"Gather together those things that change for the same reason, and separate those things that change for different reasons… a subsystem, module, class, or even a function, should not have more than one reason to change."
Kevlin Henney Commentary
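The quoted phrasing can be illustrated with a minimal hypothetical sketch (class names invented): one class with two reasons to change, then the same behaviour split so that business-rule changes and formatting changes land in different places.

```python
# Hypothetical sketch: two reasons to change fused into one class.
class ReportMixed:
    def __init__(self, data):
        self.data = data

    def compute_total(self):   # changes when business rules change
        return sum(self.data)

    def render(self):          # changes when formatting changes
        return f"Total: {self.compute_total()}"

# Split by reason-to-change:
class ReportModel:
    def __init__(self, data):
        self.data = data

    def compute_total(self):
        return sum(self.data)

class ReportRenderer:
    def render(self, model):
        return f"Total: {model.compute_total()}"
```

Note the principle is about reasons to change, not "does one thing"; over-applying the "one thing" reading is how you end up with a hundred one-method classes.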
The term End-to-End (E2E) Testing is often used incorrectly.
Technically, that process involves testing from the perspective of a real user.
For example, automating a scenario where a user clicks on buttons and writes text in inputs.
That's why all the components get tested in that process (from the UI to the database).
If you're using a hack, it's no longer E2E Testing, because a real user would not do that.
A common example is when you're testing a scenario that involves multiple browser tabs (e.g. SSO Login scenario).
There are some libraries out there that cannot test in multiple browser tabs (such as Cypress), so in order to automate that scenario, you would have to pass the credentials in the header or remove the target="_blank" attribute from the element that you're clicking.
That involves a hack, and that means your test no longer mimics the exact behavior of a real user.
Another one from the testing world: Accessibility Testing
Most folks think that involves checking if your elements have the title attribute (for screen readers) and if the fonts and colors are friendly for users with visual deficiencies.
But Accessibility Testing actually just means making sure that your web application works for as many users as possible.
The major mistake here is that folks forget to include cross-browser testing in this category.
So, you might have 0.01% of users who need screen readers, but 20% of users on Safari, 15% on Firefox, and maybe even some on Internet Explorer.
For completeness, I'll add: Unit Testing
Commonly misunderstood as proving a part of a system works, and thus the system will work, so it can be deployed, without that difficult E2E stuff! The problem here is similar to hacking E2E tests: the isolated unit under test is unlikely to experience the same stimuli as it would in reality, as part of the whole. IMO, Unit Testing is entirely for team members to gain assurance/confidence that they haven't obviously broken something while making changes, without having to run a full E2E suite locally.
IMO, prefer Consumer Contract Tests: a set of tests, defined by the consumer of a component, that express the behaviour the consumer expects from it. Popular among more autonomous development teams, especially in a microservices environment that permits independent deployment of services/components.
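A minimal hypothetical sketch of a consumer-driven contract (the service and its methods are invented for illustration): the consumer publishes the behaviour it relies on as a reusable test mix-in, and the provider runs that mix-in against its real implementation before deploying.

```python
import unittest

# Hypothetical contract, written and published by the consumer team.
class GreetingContract:
    """Behaviour the consumer relies on from any greeting service."""

    def make_service(self):
        raise NotImplementedError  # provider supplies the implementation

    def test_greets_by_name(self):
        assert self.make_service().greet("Ada") == "Hello, Ada!"

    def test_rejects_empty_name(self):
        try:
            self.make_service().greet("")
        except ValueError:
            pass
        else:
            raise AssertionError("empty name should raise ValueError")

# The provider's real implementation must satisfy the consumer's contract.
class GreetingService:
    def greet(self, name):
        if not name:
            raise ValueError("name required")
        return f"Hello, {name}!"

class TestRealService(GreetingContract, unittest.TestCase):
    def make_service(self):
        return GreetingService()
```

The same contract class can also be run against the consumer's test double, which is what keeps the double honest in tools like Pact-style contract testing.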
Don't reinvent the wheel.
I know a package or library probably exists that does it better and faster, and is tested and maintained... but what if I don't want a new dependency? What if the library introduces more bloat than I want to accept? What if I'm trying to learn?
I think it's acceptable to reinvent the wheel when you don't like the wheels you find.
Yes. Pretty sure that if "the wheel" is "websites", we need to be reinvestigating them a bit.
The Dunning-Kruger effect is often misinterpreted and not well understood. The results of the original study have been criticised as wrong in their calculation/interpretation, and the subsequent buzz and all the citations it generated helped solidify the myth.
This McGill article is a great read.
"There's always a catch" / "There are always technical tradeoffs" / "Faster, better, cheaper: pick two." This is true most of the time, but it's important to understand that in technology occasionally someone really does just build a better mousetrap, and it's really important to look for the times when that happens, because when it does, it means the other options are dead-end technologies.
I'm in discussions at work like, "should we keep putting dozens of apps on one managed dedicated instance or should we adopt containers?" There's really no serious conversation to be had there.
HTML Validation and Lighthouse scores and all of the accessibility best practices don't mean that your site is usable.