Skepticism about Large Language Models (LLMs) and ChatGPT
Abstract: Top AI scientist Yann LeCun is skeptical that large language models (LLMs) are the right path to artificial general intelligence (AGI) and believes they have serious limitations.
Large Language Models (LLMs) will never be really smart
Yann LeCun, Chief AI Scientist at Meta and one of the top scientists in the field, has said that the large language models (LLMs) powering generative AI products such as ChatGPT will never achieve the ability to reason and plan like humans.
Who is Yann LeCun?
Yann LeCun is a French-American computer scientist working primarily in AI-related fields. He is a professor of computer science and neural science at New York University (NYU). He shared the 2018 Turing Award with Geoffrey Hinton and Yoshua Bengio for his work on deep neural networks and is widely regarded as one of the world's leading artificial intelligence scholars. He joined Meta in 2013 and is currently its Chief AI Scientist.
What is Yann LeCun saying about LLMs?
Yann LeCun is very skeptical about LLMs. His views are:
- LLMs would never achieve the ability to reason and plan like humans
- LLMs have a very limited understanding of logic
- LLMs do not have persistent memory
- LLMs can only answer prompts accurately if they have been fed the right training data and are, therefore, “intrinsically unsafe”
- LLMs do not understand the physical world
- The evolution of LLMs is superficial and limited: the models learn only when human engineers intervene to train them on new information, rather than reaching conclusions organically, as people do.
- To most people LLMs appear to be reasoning, but mostly they are exploiting knowledge accumulated from vast amounts of training data.
- LLMs are very useful despite their limitations
- LLMs cannot develop into superintelligence because of a lack of understanding of the physical world
- LLMs can't plan hierarchically
- LLMs are not smarter than a house cat
What is Yann LeCun offering as an alternative?
Yann LeCun runs a team of about 500 staff at Meta’s Fundamental AI Research (FAIR) lab. Here is his approach to AI:
- he is working to develop an entirely new generation of AI systems
- he estimates this could take 10 years to achieve
- he is working toward AI that can develop common sense
- the approach is known as “world modeling”: learning how the world behaves through abstract representations
- he has focused the FAIR lab on the longer-term goal of human-level AI
- he regards achieving AGI as a scientific problem, not merely an engineering one
- the FAIR lab is exploring building “a universal text encoding system” that would allow a system to process abstract representations of knowledge (a toy sketch of this idea follows this list)
- he is focusing on developing radical new ways to give machines “superintelligence”
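LeCun has not published a finished recipe for world modeling, so any code can only be illustrative. The toy sketch below, in which every module name, shape, and training signal is our own assumption rather than anything from FAIR, shows the core idea behind joint-embedding, world-model-style approaches as they are commonly described: instead of predicting the next token, encode the current and the future observation into abstract representations and learn to predict the future representation, so the loss lives in embedding space rather than in raw text or pixels.

```python
# Toy sketch of a joint-embedding predictive world model (hypothetical;
# NOT LeCun's actual architecture). The model predicts the *representation*
# of the next observation rather than the raw observation itself.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps a raw observation to an abstract representation."""
    def __init__(self, obs_dim: int, repr_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 256), nn.ReLU(), nn.Linear(256, repr_dim)
        )

    def forward(self, obs):
        return self.net(obs)

class Predictor(nn.Module):
    """Predicts the next representation from the current one plus an action."""
    def __init__(self, repr_dim: int, action_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(repr_dim + action_dim, 256), nn.ReLU(), nn.Linear(256, repr_dim)
        )

    def forward(self, z, action):
        return self.net(torch.cat([z, action], dim=-1))

obs_dim, action_dim, repr_dim = 32, 4, 16
encoder = Encoder(obs_dim, repr_dim)
predictor = Predictor(repr_dim, action_dim)
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(predictor.parameters()), lr=1e-3
)

# Toy batch: current observation, the action taken, and the next observation.
obs = torch.randn(8, obs_dim)
action = torch.randn(8, action_dim)
next_obs = torch.randn(8, obs_dim)

z = encoder(obs)
z_next = encoder(next_obs).detach()  # stop-gradient on the target side
loss = nn.functional.mse_loss(predictor(z, action), z_next)

opt.zero_grad()
loss.backward()
opt.step()
print(f"embedding-space prediction loss: {loss.item():.4f}")
```

In practice such systems need extra machinery, for example an exponential-moving-average target encoder or variance regularization, to keep the representations from collapsing to a constant; this toy omits all of that.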
Academic research in AI is a nice topic, but big money is at stake
While Yann LeCun is surely right that LLMs have serious limitations, others are making millions of dollars selling AI services built on admittedly imperfect LLM systems. And the general public is clearly impressed by and interested in those services, and has no problem calling them AI systems, however imperfect they might be.
Yann LeCun is definitely right in his arguments: anyone who has been using ChatGPT for a while soon becomes aware of the system’s limitations and will likely agree with him. He worded the arguments very nicely. Most users just felt that “something is wrong with ChatGPT”, and now we have a scientist explaining to us what the problem is.
But on the other hand, the GPT-4o-based ChatGPT at times shows “elements of intelligent behavior”, from fluency in some 50 natural languages to writing elaborate texts, essays, and reports at the level of an educated person, with superhuman speed and efficiency. And, very importantly, ChatGPT’s services can be quite useful. ChatGPT has set the bar very high, and other companies putting similar AI systems and products on the market will need to match or exceed it.
Academic research is one thing, but engineering products and services for consumers that make (big) money is something else. Pressured by the success of OpenAI’s ChatGPT, Meta AI decided to engineer a not-so-perfect but usable AI system to fill the need for AI services while we wait for the results of that 10-year research program toward artificial general intelligence (AGI).
The result is Meta AI’s own LLM, called Llama 3 ([7]), which will hopefully be able to compete with ChatGPT’s popularity. Llama 3 is released as an open-source project, which has made Meta very popular in the open-source community. Since Llama 3 is open-source software, there is a question of how Meta will make money from it, given that keeping a large number of AI engineers on the payroll is costly. Llama 3 gets high marks from the community but still falls short of GPT-4’s abilities.
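Because the Llama 3 weights are openly available, anyone can try the model locally. Here is a minimal sketch using the Hugging Face transformers library; the checkpoint name, generation settings, and prompt are our assumptions, not something from the article, and downloading the weights may require accepting Meta’s license on the Hugging Face hub.

```python
# Minimal sketch: chatting with a Llama 3 instruct checkpoint via the
# Hugging Face transformers library. The model ID and generation settings
# are illustrative assumptions; access may require accepting Meta's license.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Meta-Llama-3-8B-Instruct",
    device_map="auto",  # spread the model across available GPUs/CPU (needs accelerate)
)

messages = [
    {"role": "system", "content": "You are a concise assistant."},
    {"role": "user", "content": "In one sentence, why do LLMs hallucinate?"},
]

# Recent transformers versions accept chat-style message lists directly
# and apply the model's chat template under the hood.
result = generator(messages, max_new_tokens=64)
print(result[0]["generated_text"][-1]["content"])  # the assistant's reply
```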
Rock-star Tech CEOs overpromising
We understand very well what Yann LeCun is saying. We live in an era of rock-star tech CEOs who present their technology products to us and, deliberately or not, mislead us with promises of their technology’s excellence. And at the same time, accidentally or not, those high-tech companies’ sales, profits, and share prices are skyrocketing.
Everyone remembers Elon Musk, CEO of Tesla, who promised Full Self-Driving (FSD) technology a decade ago; it still has problems. For a decade it has looked “close” to the target, but it was never finished and never really smart.
Similarly, Sam Altman, CEO of OpenAI, is promising us very smart AI systems, with superintelligence within the next ten years (published 2023, see [5]). Yann LeCun is telling us we are being overpromised: it is not going to happen soon, it will not be easy, and it definitely will not happen with LLMs. In the meantime, GPT-4 still hallucinates about 3% of the time (see [6]), and ChatGPT output still needs to be verified and proofread by humans. The previous version, GPT-3, had 175 billion parameters; the new version, GPT-4, reportedly has 1.76 trillion, and it still has the same accuracy problems as before.
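Until hallucinations are solved, the practical consequence is a human-in-the-loop workflow: treat model output as a draft to be verified, never as a finished fact. A minimal sketch of that pattern with the OpenAI Python SDK (the model name and prompt are illustrative assumptions):

```python
# Human-in-the-loop sketch: an LLM answer is only a draft until a person
# has reviewed it. Model name and prompt are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "user", "content": "Summarize the 2018 Turing Award in two sentences."}
    ],
)
draft = response.choices[0].message.content
print("DRAFT (verify against primary sources before use):\n", draft)

# Gate anything important behind explicit human approval.
if input("Approve this text? [y/N] ").strip().lower() == "y":
    print("Approved by a human reviewer.")
else:
    print("Rejected: unverified model output must not be published.")
```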
Conclusion
The search for artificial general intelligence (AGI) continues. In the meantime we have the imperfect but quite usable and entertaining LLM-based ChatGPT, and the most important thing is not to buy into a “superintelligence” story until we can clearly verify it. And not to gamble anything important, like human lives, on ChatGPT output, the way we sometimes gamble lives on imperfect versions of supposedly smart autonomous, driverless-car technologies.
References:
[1] Meta AI Chief: Large Language Models Won't Achieve AGI
https://www.pcmag.com/news/meta-ai-chief-large-language-models-wont-achieve-agi
[2] Hacker News discussion
https://news.ycombinator.com/item?id=38972032
[3] Meta AI chief says large language models will not reach human intelligence
https://www.ft.com/content/23fab126-f1d3-4add-a457-207a25730ad9
[5] OpenAI: Governance of superintelligence
https://openai.com/index/governance-of-superintelligence/
[6] Leaderboard: OpenAI’s GPT-4 Has Lowest Hallucination Rate
https://aibusiness.com/nlp/openai-s-gpt-4-surpasses-rivals-in-document-summary-accuracy
[7] Meta AI: What is Llama 3 and why does it matter?
https://zapier.com/blog/llama-meta/
[8] The New York Times on Mark Zuckerberg and Meta AI (May 29, 2024)
https://www.nytimes.com/2024/05/29/technology/mark-zuckerberg-meta-ai.html