This article is the last part of a series of notes on Gergely Orosz's What is Old is New Again talk that attempts to put his predictions (that strongly resonate with me) into practical steps for smart software engineers.
"Finally, the elephant in the room: AI"
A few weeks ago my brother, who is a software developer, told me he is afraid that AI will replace us. While I understand his worries, I have never been afraid of such a scenario.
Yes, AI (let's call it by its true name: LLM - large language model) looks freakishly smart when used for the first time. It "understands" you in almost any human language, can reply in that same language and can produce a very appealing answer - be it text, an image, structured data, code or even a video. It seemingly absorbs all human knowledge into one gigantic mind and throws out PhD-level answers across all industries at once.
Until you start using it daily.
Limits we don't want to see
The word "hallucination" has become almost as popular as "AI" itself, because an LLM often outputs what looks like a competent answer, yet is completely off.
On top of that, if you have ever used a chat LLM for a longer discussion, you have most definitely realized at some point that it stops considering instructions you gave earlier. This phenomenon is often called "context overflow" and simply means that the LLM cannot hold that much context, even though to a human mind it might seem minimal.
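To make the forgetting concrete, here is a minimal sketch of what chat front-ends typically have to do: trim the conversation so it fits a fixed token budget. The `trim_history` helper and word-based token counting are my own simplifications for illustration - real systems use an actual tokenizer - but the effect is the same: the oldest instructions silently fall out of the window.

```python
# Toy illustration of "context overflow": the conversation must be trimmed
# to fit the model's fixed context window. Token counting is simplified to
# word count here; real APIs use a proper tokenizer.

def count_tokens(message: str) -> int:
    return len(message.split())

def trim_history(history: list[str], budget: int) -> list[str]:
    """Keep the most recent messages whose total token count fits the budget."""
    kept, used = [], 0
    for message in reversed(history):
        cost = count_tokens(message)
        if used + cost > budget:
            break  # everything older than this point is silently forgotten
        kept.append(message)
        used += cost
    return list(reversed(kept))

history = [
    "Always answer in formal English.",      # early instruction
    "Summarize this report for me.",
    "Now translate the summary to German.",
    "Actually, make it shorter please.",
]

# With a tiny budget, the earliest instruction falls out of the window:
print(trim_history(history, budget=15))
```

With a budget of 15 "tokens", only the two most recent messages survive - the "always answer in formal English" instruction is gone, and the model will appear to have ignored you.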
Context overflow, however, is just a symptom of a much deeper problem: the cost of running LLMs. Every chat message, image prompt or code suggestion is very, very expensive. We will eventually optimize this (GitHub Copilot, for example, is quite cheap for the value it brings), but keep in mind that most LLMs today are burning money, and given higher interest rates this will not last long.
The biggest problem of all, however, is that LLMs train more and more on their own output, eventually collapsing into a world of hallucinations. And while some LLM producers claim they can detect AI-generated text, this is ultimately an endless arms race.
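The collapse dynamic can be sketched with a toy statistical analogy (my own simplified illustration, not a claim about any specific model): fit a distribution to a finite sample, then generate the next "training set" from that fit, and repeat. With each generation trained only on the previous generation's output, the fitted spread drifts toward zero - diversity is lost, and the model ends up confidently producing a narrow, distorted picture of the original data.

```python
import random
import statistics

# Toy analogy for model collapse: each "generation" is fitted only on a
# finite sample produced by the previous generation. The fitted spread
# (sigma) tends to shrink over generations, i.e. diversity collapses.

random.seed(42)  # reproducible run

mu, sigma = 0.0, 1.0   # generation 0: the "real data" distribution
N = 10                 # a small training set per generation exaggerates the effect

sigmas = [sigma]
for _ in range(200):
    samples = [random.gauss(mu, sigma) for _ in range(N)]
    mu = statistics.mean(samples)     # refit the model on its own output
    sigma = statistics.stdev(samples)
    sigmas.append(sigma)

print(f"spread of generation 0:   {sigmas[0]:.3f}")
print(f"spread of generation 200: {sigmas[-1]:.6f}")
```

Real LLM training is vastly more complex, of course, but the mechanism - a feedback loop that loses information about the original distribution with every generation - is the same one researchers worry about.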
Not a first rodeo
Also, let us remember that this is not the first AI hype cycle. You can read more about AI history here, here, here and here. TL;DR: it gets better every time, but I don't think this time is the ultimate one - especially because it looks like we have already hit a plateau.
The decision maker
Now to my final argument. "AI" isn't going to replace you because, in its current form and shape, it lacks two deal-breaker attributes:
It isn't creative, meaning it only recombines what has already been out there in the past.
And it cannot make decisions. I literally tried to force ChatGPT into making a business decision by asking it to "act as CEO", but it refused. It gave me some answers, but emphasized that I should take them as suggestions, not decisions.
And this is why it won't replace your creative job of architecting and building software. It does make us super-productive (thanks for that!), but it isn't anywhere close to becoming the decision-maker that you, as a software engineer, need to be almost constantly.
So don't worry about your job. Instead, embrace the LLMs and become more productive thanks to them.
Bonus: "AI" is already everywhere
From fraud detection, recommendations, predictions, digital assistants (Siri, Alexa), translation, healthcare, advertising, weather forecasting and more - usually just called neural networks - the world is full of AI that is not an LLM, yet delivers serious business value nonetheless. Let's not forget to look at the bigger picture in times of unease.