A few days ago, I came across an intriguing headline on LinkedIn making a bold claim: "Tech Company Aims to Increase Programmers' Productivity by 50% with GitHub Copilot." What made it even more curious was that the author of the post was not a programmer. Despite the audacity of the headline, I believe it opens the door to an interesting analysis and discussion.
There's no denying the impressive capabilities of the AI tools available to the public today. In the field of programming, they are already significantly assisting developers, optimizing their routines and increasing productivity. With the right tools, problems can be solved at a faster pace. However, for those unfamiliar with programming, these AI tools may seem almost mystical, giving the impression that they can quickly and automatically solve any problem. But we know that reality is quite different.
The enthusiasm generated by these tools is fully justified, and before you get the wrong idea about me, know that I am an enthusiast too. However, I believe that not everyone fully understands what these tools are actually capable of. On one hand, you have beginners in programming who fear being replaced by machines. On the other hand, you have CEOs seeking magical solutions to reduce costs or increase productivity, perhaps without a clear understanding of the actual work performed by a programmer.
It's not quite like that...
"ChatGPT (or Copilot), create a page to sell tickets for a show in Node.js. The backend should be prepared for an average of 2,000 requests per second and be resilient to failures. Set up a load balancer to distribute requests among distributed servers. Ensure caches are functional and actually reducing server load. Instrument real-time monitoring tools. And of course, the database should be clustered with a good replication strategy."
The example is overly exaggerated, I know. But it illustrates the complexity behind implementing something that may seem simple to someone who is not a programmer. Writing code is only part of a programmer's job, and I'm sure generative AIs excel at that, but it's not everything a programmer does.
We programmers spend a significant portion of our day without writing a single line of code. We must first understand the requirements of new features, engage in conversations and adjustments with stakeholders, design the technical architecture, and finally translate all that information into code. And let's not forget that this code must be written in context, considering how it fits with the rest of the existing codebase.
What I mean is that these AI tools, although very useful, don't do magic. But they can be good assistants to programmers.
AI as a rubber duck
One of the readings that greatly expanded my perspective in software engineering was the classic book "The Pragmatic Programmer" by Dave Thomas and Andrew Hunt. One of the chapters talks about a curious debugging technique: the rubber duck.
The basic idea behind rubber duck programming is to explain the code or problem you're facing out loud, as if you were explaining it to a rubber duck. By verbalizing the problem or describing the code step by step, you often find a solution or gain a new perspective on the problem.
ChatGPT is excellent at conversing, and furthermore, conversing in context. Could this artificial intelligence be the evolution of Dave Thomas and Andrew Hunt's rubber duck? There are already extensions for Visual Studio Code that integrate with ChatGPT, using it as a rubber duck. You can check them out here.
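To make the idea concrete, here is a minimal sketch in Python of how a rubber-duck chat prompt might be framed. The function name, the system prompt, and the message format are my own illustrative assumptions, not the implementation of any particular extension; the payload shown matches the common chat-message shape used by LLM APIs, and you would pass it to whatever client library you use.

```python
# A minimal sketch of "rubber duck" prompting with an LLM chat API.
# This helper only builds the conversation payload; sending it to a model
# is left to whichever client library you prefer.

def build_rubber_duck_prompt(code: str, problem: str) -> list[dict]:
    """Frame a debugging question the way you'd explain it to a rubber duck."""
    system = (
        "You are a rubber duck. Ask short clarifying questions and let the "
        "programmer reason out loud; only suggest fixes when explicitly asked."
    )
    user = (
        f"I'm stuck on this code:\n\n{code}\n\n"
        f"Here is what I think it should do, step by step: {problem}"
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

messages = build_rubber_duck_prompt(
    code="average = sum(prices) / len(prices)",
    problem="It should print the total... oh, I'm dividing. Never mind, duck.",
)
```

Note that the point of the system prompt is the same as the point of the duck: forcing you to verbalize the problem step by step often surfaces the bug before the "duck" says anything at all.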
I like how GitHub chose to name its product Copilot, and that aligns with the concept of the rubber duck. The tool aims to be a co-pilot rather than the actual pilot. It is the programmer's assistant, their rubber duck.
The harsh reality
This message is for those who are anxious and concerned about the future of their programming careers: relax! But don't relax too much, because the harsh reality is that it is the fundamentals of software engineering that make a good programmer, not just the code.
With the popularization of computers for the general public in the 70s and 80s, accounting and finance professionals felt threatened by spreadsheet software. A machine that could store thousands of rows and columns and never make calculation errors. Who would reject that?
It is true that spreadsheets were and still are powerful, and they did pose a threat to "spreadsheet writers'" jobs. However, the professionals who interpreted the data, understood the business context, and applied accounting concepts thrived, because they learned to use spreadsheets instead of criticizing them.
Programmers are not mere code writers. Don't fear your new rubber duck; instead, use it. And thank you, ChatGPT, for helping me put this article together.
Top comments (1)
I agree that AI won't replace developers or engineers, but the example you gave about the Node.js backend could actually be a pretty good prompt for codeGPT, because it's pretty specific. What codeGPT is not so good at, and what is our primary job as developers, is figuring out the non-specific requirements. The difference between a good developer and a bad developer is whether they figure out these non-specifics before or after they've made an architectural choice.