Nick K
Stop Gaslighting Me - AI Won't Replace Human Devs Anytime Soon

I have seen lots of recent discussion that goes something like this:

AI is going to build your whole software product and you need drastically fewer people

I feel like I'm being gaslit every time I read this, and I worry it makes folks early in their software development journey feel like the field is a bad time investment.

Note: there was a long conversation on HN about this post that you may want to read here.

I'm a developer myself (2k+ commits three years in a row) and have been working a TON with LLMs at Trieve since they came into vogue two years ago. I know most of the "AI engineering" tricks, especially with respect to retrieval.

Let me walk through the story of a recent PR:

Trieve's codebase was originally intended to serve a single-tenant application called Arguflow. Because of this, we spent very little time writing a quality API, since we thought the only application consuming it would be our own.

Eventually, we became an API-first business and this changed. We made V2 response types for our most commonly used routes, which were much cleaner and easier to parse.

Unfortunately, this left our OpenAPI spec with a large number of enums, which clutter SDKs generated from the spec (open source or not). I have known this is a pain point for a long time, but recently the yournextstore team brought it back to the forefront of my attention and I resolved to fix it!
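To make the pain concrete, here is a hypothetical TypeScript sketch (made-up type names, not Trieve's actual ones) of what an enum-heavy spec does to a generated SDK: every response becomes a union, and every consumer has to narrow it by hand.

```ts
// Illustrative only: the shape of the problem, not Trieve's real types.
// An OpenAPI `oneOf` over V1 and V2 response bodies generates a union
// that every SDK consumer has to narrow structurally before using it.
interface ScoreChunkDTO { chunk_html: string; score: number }
interface ChunkMetadata { chunk_html: string }

type SearchResponseTypes =
  | { score_chunks: ScoreChunkDTO[] }                  // V1 shape
  | { chunks: ChunkMetadata[]; total_pages: number };  // V2 shape

function totalResults(res: SearchResponseTypes): number {
  // This narrowing boilerplate leaks into every call site.
  if ("chunks" in res) return res.chunks.length;
  return res.score_chunks.length;
}
```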

It was fairly fixable thanks to hey-api (by @mrlubos) being awesome and supporting manually defining routes on top of generated types 🙏.
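For anyone curious what that pattern looks like, here is a minimal sketch: a hand-written route that reuses the generated types so callers get one concrete response shape. The import path, type name, endpoint, and headers are my assumptions for illustration, not hey-api's or Trieve's documented API.

```ts
// Hand-written SDK function on top of hey-api's generated types.
// All names below are illustrative assumptions.
import type { SearchResponseBody } from "./client/types.gen";

export async function searchV2(
  datasetId: string,
  apiKey: string,
  body: unknown,
): Promise<SearchResponseBody> {
  const res = await fetch("https://api.trieve.ai/api/chunk/search", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "TR-Dataset": datasetId,
      Authorization: apiKey,
      "X-API-Version": "V2", // pin one version so the return type stays concrete
    },
    body: JSON.stringify(body),
  });
  if (!res.ok) throw new Error(`search failed: ${res.status}`);
  return (await res.json()) as SearchResponseBody;
}
```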

LLMs ARE NOT AT ALL CLOSE TO BEING ABLE TO DO THIS KIND OF WORK!!

I had to open our lib.rs file, find the routes with V2 types, look at their response type definitions, remind myself of the name of the V2 response field on the enum, change that line in each of the offending routes, and finally update imports. Not a lot of changed lines, but a lot of domain expertise.

After my PR got approved (see it here), I then needed to publish a new release of our client and open a PR to yournextstore to get rid of the now-unnecessary code that determined whether a response was V1 or V2.

  1. LLMs cannot update the SDK code to V2 response types in the first place
  2. LLMs cannot successfully configure a CI action such that I won't have to publish myself
  3. LLMs do not understand that the downstream effect of fixing the problem will be updating yournextstore's code (see the sketch below)
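On that third point, the payoff of the fix is the downstream deletion it enables. A hypothetical before/after, with illustrative shapes rather than yournextstore's actual code:

```ts
// Illustrative shapes only, not yournextstore's real code.
interface Chunk { chunk_html: string }
interface SearchResponseBody { chunks: Chunk[]; total_pages: number }

// Before: the SDK returned a V1/V2 union, so call sites had to sniff
// the version at runtime, e.g.
//   const chunks = "chunks" in res ? res.chunks : res.score_chunks;
// After: the SDK returns the concrete V2 type and the guard is deleted.
function extractChunks(res: SearchResponseBody): Chunk[] {
  return res.chunks;
}
```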

Please stop gaslighting human devs into thinking AI is going to make their expertise irrelevant. We are a long way from that reality.

Top comments (3)

Henry Wertz • Edited

Indeed. And even if some LLM could update the SDK code to V2 response types, etc., someone would still have to carefully go through and make sure the code actually does what one thinks it does, make sure there aren't just blobs of hallucinated code in there (presumably code that compiles or passes runtime checks, since you'd notice it right away if the code didn't even build or run, but is nevertheless nonsense), and so on.

Honestly that's my biggest concern -- people use AI code that looks OK, passes a few cursory tests, but is riddled with weird behavior and bugs that could be hard to fix depending on the quality of code produced and the size of the code base -- after all, in this case, there are zero human programmers who have any experience with this code base.

And even then (if there are no problems with all that stuff), one has to be able to specify what they want the code to actually do precisely enough if they want code that does precisely that. A non-programmer will say "Well, I want a table here"; a programmer will ask "What kind of table? What info goes in there? Is it a table that the user can expand/contract, sort columns, etc., or just a fixed table?" and so on, just for that trivial example.

I will note, as a bit of a computer historian, one should look at the hype behind 4GLs ("4th Generation Languages") that came out in the late 1970s through around the mid to late 1980s. Their proponents were sure these would make programming a matter of just describing what you want done in more or less plain English, so easy that you wouldn't need programmers: anyone who could describe a problem would get code written up for them.

The reality was that you ended up with essentially domain-specific languages, usually with a database bolted on, that weren't quite easy enough for a non-programmer to use anyway (and, honestly, no matter how easy it was, plenty of non-programmers can't describe what they want a program to do in anywhere near specific enough terms to expect a sensible result). And programmers found these clunky, slow, and restrictive compared to just coding it up in some ordinary language. These often either didn't produce a standalone binary at all (the program ran inside a special environment), or if they did, it included a large, slow, memory-hungry runtime (essentially bundling the special environment with the application). LLMs are more flexible than 4GLs were, but the hype is strikingly similar.

I'd also be concerned, in the long term, if more code was AI-developed, what would future LLMs train on? I mean, the present-day ones were trained on code from GitHub or Stack Overflow or whatever... having future LLMs train off of LLM-generated code just doesn't sound like it'd end well to me.

Great article!
--Henry

Gabriel Ibe

Spoken true facts 👌🏼

Lubos

Thanks for the mention! I worry about that "fairly fixable" part, let me know if anything could be improved in the package to make your life easier