Aren't you worried they'll just replace you with computers?
In the past, I've always laughed off comments like this from non-technical friends and family. Programming is hard, after all, and Turing already proved that no computer can even tell if a program will halt or not, so what was there to worry about? Machine learning was neat, but it was mostly good at classification, not generating content.
But then came the latest wave of generative AI: DALL-E, Stable Diffusion, GPT-3, ChatGPT. These all blew away my perceptions of what AI is capable of. Granted, the content they produce is generally not that interesting: I like Ted Chiang's description of ChatGPT as a "Blurry JPEG of the Web". But then again: neither is most of the code being written today. Think about how many CRUD APIs you've implemented. Could an AI have just done that for you? Especially with a little supervision?
It's a sobering thought. But even before comforting ourselves by considering all the aspects of our job that an AI couldn't do (like clarifying requirements between multiple stakeholders, managing projects, or handling production issues), let's take a look at how good a job AI can actually do at coding. Specifically, let's see how well GitHub Copilot, a code-generation tool powered by OpenAI's Codex model, does at refactoring some Go code.
Our starting point: a Blog API
Here's a little code that we'll refactor with ~~our eventual replacement's~~ Copilot's help:
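Something in the spirit of the snippet below, sketched from the description rather than copied verbatim; names like listPosts and the posts table are illustrative:

```go
package main

import (
	"database/sql"
	"encoding/json"
	"log"
	"net/http"

	_ "github.com/lib/pq" // Postgres driver registration
)

type Post struct {
	ID    int    `json:"id"`
	Title string `json:"title"`
	Body  string `json:"body"`
}

// db is the global connection that we'll eventually refactor away.
var db *sql.DB

func listPosts(w http.ResponseWriter, r *http.Request) {
	rows, err := db.Query("SELECT id, title, body FROM posts")
	if err != nil {
		// This error-rendering block is the kind of thing that gets
		// copy-pasted into every handler.
		w.WriteHeader(http.StatusInternalServerError)
		json.NewEncoder(w).Encode(map[string]string{"error": err.Error()})
		return
	}
	defer rows.Close()
	var posts []Post
	for rows.Next() {
		var p Post
		if err := rows.Scan(&p.ID, &p.Title, &p.Body); err != nil {
			w.WriteHeader(http.StatusInternalServerError)
			json.NewEncoder(w).Encode(map[string]string{"error": err.Error()})
			return
		}
		posts = append(posts, p)
	}
	json.NewEncoder(w).Encode(posts)
}

// createPost and getPost (not shown) repeat the same query-scan-render shape.

func main() {
	var err error
	db, err = sql.Open("postgres", "postgres://localhost/blog?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	http.HandleFunc("/posts", listPosts)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```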
All it does is create, list, and show simple blog objects in JSON form. But the code is repetitive, uses a global variable for the database connection, and doesn't have any data layer separation. Let's get started on cleaning that up.
Quick aside: This snippet is short enough that you could probably pass the whole dang thing into ChatGPT and ask it to do the refactoring. But most of us are working on codebases where that's not an option, so I will demonstrate a more targeted approach in this article.
Cleaning up repeated code
The "render an error message" block is repeated many times in this API, even as simple as it is. Now, this is an easy extraction that we don't need AI for, but let's see how it does anyway. I started by typing out a comment for the function I planned to write, and Copilot jumped in:
Uh... You're not wrong, Copilot; you're just unhelpful.
Better. I hit TAB and save myself some keystrokes. Then I hit ENTER, and Copilot jumps in with a suggestion for the signature:
Hey, actually, that's pretty much what I wanted. Let's do it. I then hit TAB over and over until Copilot has finished the function for me:
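The result was roughly this (a representative reconstruction, not the verbatim suggestion):

```go
// renderError writes a JSON error body with the given HTTP status code.
func renderError(w http.ResponseWriter, status int, message string) {
	w.WriteHeader(status)
	json.NewEncoder(w).Encode(map[string]string{"error": message})
}
```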
This isn't how I'd have written it, but I can also live with it. This is a common theme with Copilot: it can't read my mind (yet?), but sometimes what it comes up with still works. (That describes my coworkers, too, to be honest.)
Okay, now let's use it. There isn't a way to ask Copilot to go and refactor code for me (although I would not be surprised to see this in the future; keep an eye on Copilot Labs), so instead I'll delete the old code and see if Copilot suggests our new function instead.
Hm... no. Okay, we'll give it a hint by typing in the method name on our own:
The next time around, Copilot has a better idea of what I want (probably because it notices the increased use of the new method):
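For context, each call site shrinks from the copy-pasted block to a single line (again assuming the helper as sketched above):

```go
// Before: repeated at every error path
w.WriteHeader(http.StatusInternalServerError)
json.NewEncoder(w).Encode(map[string]string{"error": err.Error()})

// After: one call to the extracted helper
renderError(w, http.StatusInternalServerError, err.Error())
```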
The rest of the way, I find Copilot takes a little longer to respond, so I mostly just do the refactoring by hand. I would expect the tooling to evolve to the point where the whole process is one command, though, so it's too early to declare AI defeated by a little latency.
Separating out a data layer
Inline database queries aren't always a bad choice, but I like to at least separate the code that accesses the database from the business logic. Here, let's go ahead and refactor to use GORM, a popular Go ORM library. That is, let's have Copilot do it.
We start by importing the GORM packages, so Copilot will consider them available (I hope):
```go
import (
	// <snip>
	"gorm.io/driver/postgres"
	"gorm.io/gorm"
)
```
Then I type out a struct definition that I hope will work, and actually, it looks like it will:
Let's see how it goes:
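Something like this, give or take (a reconstruction, with field names carried over from the sketch above):

```go
// Post is now a GORM model; gorm.Model contributes the ID, CreatedAt,
// UpdatedAt, and DeletedAt fields.
type Post struct {
	gorm.Model
	Title string `json:"title"`
	Body  string `json:"body"`
}
```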
Not bad, but it's a very simple model. Anecdotally, I had worse results when translating more complex models across packages in a real-life refactor.
Next, let's rewrite the database connection opening to use GORM. This is another case where things get a little creepy, as Copilot manages to guess my connection string even though the new format is different:
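The new connection code looks roughly like this; the DSN is a placeholder, and (for reasons explained in a moment) it goes into a separate gormDB variable:

```go
// gormDB lives alongside the old db handle while the refactor is in flight.
var gormDB *gorm.DB

func initGorm() error {
	dsn := "host=localhost user=blog dbname=blog port=5432 sslmode=disable"
	var err error
	gormDB, err = gorm.Open(postgres.Open(dsn), &gorm.Config{})
	return err
}
```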
Okay, now let's start refactoring. Copilot needed a little hand-holding here for two reasons: first, it doesn't like it when the code doesn't compile, so I had to set up my GORM connection in a different variable to avoid breakage mid-refactor. Second, it was hesitant to use GORM (since it hadn't been used anywhere yet), so I had to prompt it with a comment:
With that, it was able to rewrite the method as I intended:
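The rewritten handler came out something like this, prompt comment up top (a reconstruction, not the verbatim suggestion):

```go
func listPosts(w http.ResponseWriter, r *http.Request) {
	// use gorm to list all posts
	var posts []Post
	if err := gormDB.Find(&posts).Error; err != nil {
		renderError(w, http.StatusInternalServerError, err.Error())
		return
	}
	json.NewEncoder(w).Encode(posts)
}
```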
For createPost, I just deleted the whole function body and let Copilot take the wheel. It even re-inserted my prompt comment from earlier; that was Copilot's doing, not mine:
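Roughly what it produced (again reconstructed), prompt comment and all:

```go
func createPost(w http.ResponseWriter, r *http.Request) {
	// use gorm to create the post
	var post Post
	if err := json.NewDecoder(r.Body).Decode(&post); err != nil {
		renderError(w, http.StatusBadRequest, err.Error())
		return
	}
	if err := gormDB.Create(&post).Error; err != nil {
		renderError(w, http.StatusInternalServerError, err.Error())
		return
	}
	json.NewEncoder(w).Encode(post) // note: plain 200, not the original 201
}
```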
Also, it's lost the HTTP 201 status code that the first version of the code had. Tsk tsk. Hopefully, this would fail your unit tests (not pictured here for brevity, of course 😉).
getPost is made quick work of, too:
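Again a sketch; how the ID comes out of the request depends on your router, so the query-parameter lookup here is a stand-in:

```go
func getPost(w http.ResponseWriter, r *http.Request) {
	id := r.URL.Query().Get("id") // placeholder; your router may parse a path segment
	var post Post
	if err := gormDB.First(&post, "id = ?", id).Error; err != nil {
		renderError(w, http.StatusNotFound, err.Error())
		return
	}
	json.NewEncoder(w).Encode(post)
}
```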
Take it up a notch
At this point, we've shortened our code by nearly a third, and the refactoring is nearly complete. There's one last thing I'd like to do: remove the db global variable and instead bind the database to the handlers at runtime (an intermediate Go technique that makes testing your code much easier).
To do this refactor, I'll give Copilot a hint at what I want by typing out the function signature and seeing what it comes up with:
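Sketched out, the pattern is the following (handleListPosts is an illustrative name):

```go
// handleListPosts binds the database at construction time, so tests can
// inject their own *gorm.DB instead of relying on a global.
func handleListPosts(db *gorm.DB) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		var posts []Post
		if err := db.Find(&posts).Error; err != nil {
			renderError(w, http.StatusInternalServerError, err.Error())
			return
		}
		json.NewEncoder(w).Encode(posts)
	}
}
```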
Okay, well... that's exactly what I wanted. I have to admit, there are times when Copilot surprises me, and right now it's making a strong showing. For the next method, Copilot needs even less of a hint:
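Same pattern, same caveat that this is a reconstruction; I've also put back the 201 that went missing earlier:

```go
func handleCreatePost(db *gorm.DB) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		var post Post
		if err := json.NewDecoder(r.Body).Decode(&post); err != nil {
			renderError(w, http.StatusBadRequest, err.Error())
			return
		}
		if err := db.Create(&post).Error; err != nil {
			renderError(w, http.StatusInternalServerError, err.Error())
			return
		}
		w.WriteHeader(http.StatusCreated) // restore the 201 flagged above
		json.NewEncoder(w).Encode(post)
	}
}
```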
After finishing the last method, I updated the router definition to call the new handler-generating methods, and the refactoring is done (view the final code here). At this point, I'm happy with the state of this little API and reasonably happy with how Copilot helped me get it done.
Did Copilot save me time?
It's a bit of a wash here, because the example is very simple and I'm pretty efficient at refactoring, but I can envision that, with a bit more practice, Copilot could help me quite a bit. It does a great job of cranking out boilerplate, like JSON marshaling and unmarshaling. But because it can't read your mind, it needs some hints to get things right, and, in the end, you might spend more time coaxing it to do your bidding than it would've taken just to write the code yourself. (You might even get inspired to write a blog post about it, and then you're really losing time!)
I also found myself stumbling a bit because Copilot wasn't able to really "refactor" code -- it was just writing new code that I could replace the old code with. I think this is just a tooling issue, though, and one that will be solved soon: Code Brushes are already in Labs and work by having the model take in a block of code as input, apply a prompt to it, and overwrite the original code with the output. A custom Code Brush probably could've done a good chunk of what I did above with less effort.
Is AI coming for our jobs?
I don't think AI, Copilot or otherwise, will eliminate the need for programmers (or artists or writers). But there's a harrowing possibility that AI could reduce the need for programmers. What if two senior devs, aided by AI, can do the work of a team of four AI-less devs?
Then again, the same could be said about programmers now compared to programmers thirty years ago. Modern IDEs, tooling, and faster computers mean that we're vastly more productive (in a business sense) than our forebears. And yet there are more programmers now than there ever were. Increasing our capabilities with AI might just be a way to make us all that much more valuable -- not less.