
Why AI Can't Replace Developers: The Real Limits of AI in Coding

Michael Amachree on November 26, 2024

AI Can't Write Better Code Than Devs
As artificial intelligence (AI) continues to evolve, it's becoming an increasingly popular tool amo...
 
Peter Vivo • Edited

So true:

  • "The paper suggests that LLMs rely on pattern recognition rather than true reasoning."
  • "Treat AI like a junior developer and guide it through the process."

In my workflow, especially when I work alone (side projects), AI is good to argue with. So sometimes it can replace a talk with a colleague.

Peter Truchly

Yes, in my experience AI-assisted coding mostly reminds me of the times when I had to review and patch code after my younger colleagues. However, as with those colleagues, AI often knows some interesting library, technique, or idea.
Certainly beneficial, but it doesn't solve everything.

Michael Amachree

Yep, I use it a ton for my side projects. I haven't used it much in large open source or collaborative projects yet, but I'm sure to try it someday.

Peter Harrison

You should add the caveat 'yet' to this article. Broadly speaking, I agree that AI can be weak. I just had a bug in some code I'm writing and asked Claude to identify the problem. The first response recommended changes which didn't really seem to address the issue. When I challenged it, it was able to re-evaluate and actually identified the issue. The thing is, I wasn't able to understand where the issue was occurring just by eyeballing the code, because the bug was a second-order effect in a browser. Claude did manage to identify it. But could it write a whole system? Obviously not. Yet.

Michael Amachree

I’d love to agree with you, but as mentioned in Apple’s article, given the current design, AI lacks true reasoning capabilities and won’t achieve it without a significant breakthrough. So, I’d say replacing human developers is still a distant possibility. The idea that AI might someday get there seems like an obvious stretch—one the article already addresses.

Peter Harrison

Oh man. Here we go again. Is there some kind of objective benchmark we can test for this "true reasoning" capability? A benchmark that 90% of humans pass but which AI consistently fails? The main issue with current models is the mechanism used to train them: back propagation. It is a very expensive approach which results in crystalline models. Switch to Hebbian learning - no small matter - and suddenly you get local learning in real time and adaptive networks. Is that far away? Probably not, given the value of what has been achieved to date. I 'lost' a debate in 2017 about how we wouldn't see systems like we see today for at least my lifetime. By 'lost' I mean most people agreed my estimate was unrealistic. My estimate was 2020. Well, it's 2024 and we have machines which pass the Turing Test so convincingly that they have to be conditioned to be explicit about being AI the whole time. Every time some proclamation is made about how we won't make further progress, we just blast on past.
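A minimal toy sketch of what "local learning" means here, assuming a single linear layer (an illustration of the general idea only, not code from the article or the comment): a Hebbian update changes each weight using only the activity of the two neurons it connects, while backpropagation needs a loss at the output and gradients pushed back through the whole network before any weight can move.

```python
import numpy as np

# Toy Hebbian update for one linear layer: the change to each weight
# depends only on the pre- and post-synaptic activity it connects,
# so learning is local and can happen online, without a global loss.
rng = np.random.default_rng(0)
x = rng.random(4)               # presynaptic activity (input vector)
W = 0.1 * rng.random((3, 4))    # weights from 4 inputs to 3 outputs
lr = 0.01                       # learning rate

y = W @ x                       # postsynaptic activity
W += lr * np.outer(y, x)        # Hebbian rule: dW[i, j] = lr * y[i] * x[j]

# Backpropagation, by contrast, needs a loss defined at the output and
# error gradients propagated backwards through every layer before any
# weight can be updated.
```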

Bradston Henry

I think you're right that, right now, AI agents will not be replacing traditional coders. My experience is that even when given the proper context, LLMs can still give you code or responses that don't reflect your needs, and a human touch is needed.

I think another big problem with LLMs in the long term, when it comes to creative solutions (e.g. writing, coding, etc.), is that they are fundamentally "non-original". They do not come up with new approaches or ideas; they only use old ideas to find solutions and attempt to give you the best answer from what already exists.

I think "creative codes", or coders solving problems in unique ways or solving new problems will always been needed.

Michael Amachree

Yeah, what I do believe is that they will be replacing most of the existing no-code website builders, though.

Mo Andaloussi

I think developers who do not use the AI tools are the ones who will get replaced because they will become slower.

On the other hand, if you work a lot with those tools, you will get rusty.

Michael Amachree • Edited

Yeah, you're kinda right. I also use AI (for web dev), but depending on your line of work as a dev, you may or may not be able to use AI a ton, so it's not fair to say everyone who doesn't use AI will become slower.

But depending on the task, using AI can be a real help.

Mo Andaloussi

What I mean by slower is that every developer has to Google search for a solution or how to install a dependency... Small tasks like this are much faster with a prompt than doing a Google search.

Michael Amachree

I completely understand—that's why I mentioned it depends on your line of work. If your company uses proprietary software and doesn't allow its documentation to be indexed by Google or utilized by existing AI models, then using AI might be counterproductive because it lacks the necessary context.

As a web developer, there's already a wealth of publicly available information that AI has learned from, making it quite useful in our field. However, for areas like cybersecurity or network engineering, depending on the company's policies and the nature of the work, leveraging AI might not be as feasible or beneficial.