Hi, I am working to become a tech lead at my company and my manager posed me this question:
If I become a tech lead, should I allow my teammates to make minor mistakes so that they can learn from them?
His reasoning was that people will learn much more from a mistake than from direct guidance, and that we could take on the cost of a small mistake, so that the team can grow.
What are your thoughts on this?
Top comments (7)
I get the idea and sentiment behind allowing people to make their own mistakes so they can learn from them. However, in a typical software development team there is usually code review taking place, which is meant to catch mistakes before they make it into the code base. The same applies to pair programming, where both devs catch each other's mistakes.
I don't think it's wise to let those small mistakes slip through on purpose, especially since it can sometimes be hard to gauge what a small mistake really is in software development. Sometimes small mistakes snowball into huge problems in production. Sometimes the reviewer doesn't quite see the greater context and underestimates the impact of a small-looking mistake.
In a constructive and healthy code review process, it is important to explain why something is potentially a mistake and to give a bit more context. This is where learning from a mistake can happen, while the mistake is being caught and corrected.
Also, what would be the alternative that lets a dev learn from their own mistakes once they make it into production? If they are not the only developer on the project, it's possible that a teammate cleans up after them. So to let the dev who originally made the mistake correct it and learn from it, we would have to enforce a "blame process": someone identifies the faulty commit or line of code that broke something, calls out the original author of the bug, and makes them fix it. I think that is unnecessarily harsh and potentially embarrassing. If that were the only avenue for fixing mistakes, it could lead to a hostile work environment. Once something is committed into production, it should become the responsibility of the entire team, not of a specific person who can be called out and blamed.
However, as a team lead I've been working in a setting where I do both: code review to try to catch mistakes early on (it's not always possible), and auditing committed code to point out to the author that they made a mistake when it is a little more serious and breaks something. The auditing part is awkward and uncomfortable for me, and I've only ever felt compelled to do it when I noticed a pattern of a dev repeating the same kind of mistake that had already been pointed out during code review. Other than that, if it's a small one-off, I quietly fix those bugs myself when I find them.
Hey, thank you for the thoughtful response.
I wholeheartedly agree with the "why" part. If we want people to learn from their mistakes, we should definitely explain why we think something is a mistake in the first place.
I also don't think that I should allow a mistake to go into production.
I guess the point here is: how can I encourage people to learn, grow, and make their own decisions without me having to look over their shoulders all the time to see if they are doing the "right" thing in my view?
Gotcha! My team works remotely, so I don't deal with that situation at the moment. Basically, the only time I notice problems while my teammates are working on something is when I'm on a pair programming call with them. That's more of a collaborative setting, though, where both sides can make mistakes and it's perfectly fine to correct each other.
As for an office setting, I agree: you shouldn't feel obligated to intervene while your devs are working on something; you should be able to step back and delegate as much as possible. Constant intervention could come across as micro-managing and would probably backfire in the long term.
I guess it's really about finding some balance between trust / letting go of control and ensuring that your product is in a good state and allows for easy continuous maintenance. It's also a balance between the individual needs of the devs, their personal growth, and the "greater good" of having a reliable product that meets the set quality standards.
The quality and system health aspects can be covered to some extent by being diligent about writing unit tests, using linters, and enforcing both automatically when something is committed. A range of beginner mistakes can be caught that way, which lets the dev correct themselves and learn from it (e.g. "why did my modification make that test fail?").
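To make that concrete, here is a minimal sketch, assuming a Python codebase with pytest wired into a pre-commit hook; the function and the discount rule are hypothetical.

```python
# A beginner slip (dividing by 10 instead of 100) would fail these tests
# before the change ever reaches the code base.

def apply_discount(price: float, percent: float) -> float:
    """Return the price after subtracting a percentage discount."""
    return price * (1 - percent / 100)

def test_full_discount_is_free():
    assert apply_discount(100.0, 100.0) == 0.0

def test_zero_discount_keeps_price():
    assert apply_discount(80.0, 0.0) == 80.0
```

A failing test name plus a readable assertion is exactly the kind of feedback loop that lets someone correct course on their own.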
But unit tests and linters don't catch situations where something could be done in a different, improved way. That requires the experience of someone more senior.
And yes, here it can get problematic when the personal opinion and preference of a more senior dev play a role. That might cloud their judgement when it comes to letting some minor issue slip.
To address that, I think it's also important for the team to set expectations right from the start and come up with a set of code style requirements and coding conventions that define how the code is supposed to be written. After all, software development is a team effort, and when reading the code base it shouldn't be obvious who wrote a particular method or function just by looking at the code style.
By setting those standards you can basically avoid discussions and arguments about code-style issues and focus on more pressing things. Some of that can also be covered and enforced by linters.
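As a hypothetical illustration (Python, with made-up names), here is the same helper in two personal styles; once the team agrees on one form, and a linter or formatter enforces it, authorship stops being visible in the code.

```python
# Style A: terse names, inline conditional
def calc(p, q):
    return p / q if q else 0.0

# The form the team agreed on: descriptive names, explicit guard, type hints
def safe_ratio(numerator: float, denominator: float) -> float:
    """Return numerator / denominator, or 0.0 when the denominator is zero."""
    if denominator == 0.0:
        return 0.0
    return numerator / denominator
```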
When it comes to improvements that have nothing to do with code style and personal taste, but with something that could affect the performance or readability of a feature (e.g. redundant "if-else-if" checks that can be reduced and simplified), then I would say: go ahead, explain, and correct. I don't think it would be fair to the junior dev to withhold that wisdom and let them try to figure it out on their own. It can take many years to get there.
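For example, here is a sketch of the kind of redundant chain I mean, with made-up grading thresholds:

```python
# Before: each elif re-tests a condition the previous branch already ruled out.
def grade(score: int) -> str:
    if score >= 90:
        return "A"
    elif score < 90 and score >= 80:  # "score < 90" is redundant here
        return "B"
    elif score < 80 and score >= 70:  # redundant again
        return "C"
    else:
        return "F"

# After: elif already implies that the earlier condition failed.
def grade_simplified(score: int) -> str:
    if score >= 90:
        return "A"
    elif score >= 80:
        return "B"
    elif score >= 70:
        return "C"
    return "F"
```

Walking the junior dev through why both versions behave the same teaches the underlying reasoning, not just the fix.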
It's probably more of a matter of "how" to do that, so it doesn't come across as condescending or overbearing.
But if the right expectations are set from the start (e.g. have guidelines in place the whole team agrees on), and there is some setup within the team where (several) more experienced developers are assigned as mentors to more junior developers, then this could work as a learning experience, all while making sure that mistakes are caught and corrected.
In that regard I have to say that I was actually yearning for a mentor when I started my software development journey, and I wouldn't have minded being corrected while I was working on something. When I was younger I had a long internship at a big software development company. They did assign me a mentor, but he never had time for me, so I was mostly on my own, which wasn't great either, and there was little I took away from that experience.
Have a conversation w/ the folks first. People learn differently. Assuming people will learn from a mistake instead of shutting down or walking away is kinda dangerous.
Having said that, yes people have the potential to learn more deeply when they are involved in the issue. They have a stake in the outcome, so they are more likely to learn.
Guiding folks, while it sounds good, also creates a dependency on the person giving advice and a learned helplessness in the person receiving it.
It takes a lot of practice to find the balance here and everyone I've ever mentored has a different mix that suits them. So have a conversation about how they'd get the most out of your support. Set boundaries in place so the risk isn't too high if they go off the rails, and when you're done see if that was effective.
I like the method of challenging the devs' assumptions and approaches and letting them figure out where the gaps are by themselves rather than feeding them the answer. This is a very common thing we do at Amazon, where we try to help people grow organically by following a more Socratic method. You can still give the dev full ownership and let them break stuff as long as the impact is not catastrophic (e.g. a legal issue or something that affects customer money), and a correction of errors/postmortem is highly recommended when mistakes do happen, so that they learn what the root cause was and how it can be prevented in the future.
Personally, I don't like the idea of deliberately allowing people to make mistakes that can be avoided.
Plus, I am sure that everyone (including me) will make a ton of mistakes on their own, without me having to allow or disallow it.
But he does have a valid point about learning and about fostering my teammates' independence. The last thing I want to become is a bottleneck.
All devs make mistakes. Some of mine have been pretty spectacular and were not caught in code review. My team lead has a similar history. In our team we value learning highly so with that emphasis we all try to learn from each other and from each other's mistakes. There is also an expectation that you will own your mistakes and fix them. Which further enhances the learning from mistakes aspect.
That said, no mistake is allowed to slip through deliberately. Instead, the code review process is used as a chance to explain why something is a bad idea or should be done differently. We all learn from each other's mistakes, not just our own.
In short, foster an environment where learning is encouraged and people are accountable for their work. That way you don't have to let mistakes go through to prod to see what happens.