Let me share a personal experience that changed how I use AI (Artificial Intelligence).
I was developing an application that generated two spreads...
Someone told me that AI even knows how to use my not-very-well-known web framework DML. This is the solution the AI provided:
So, as a first impression this looks good, but on closer inspection it has some quirks:

a) The current version is not provided as an ES6 module, so

import {button, idiv} from "./dml";

does not work.

b) The code works, but the function counter() is not needed. It just calls code that would be executed anyway. If you call the function more than once, you get the whole UI multiple times, which is not intended.
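To show point b) in isolation, here is a minimal sketch in plain JavaScript. It uses an array as a stand-in for the page and made-up element names, not DML's actual API:

```javascript
// Stand-in for the page the framework appends elements to.
const page = [];

// A wrapper like the AI-generated counter(): each call builds
// the whole UI again and appends it to the page.
function counter() {
  page.push("button: Increment"); // hypothetical elements,
  page.push("div: count = 0");    // not real DML calls
}

counter();
counter(); // second call duplicates the entire UI

console.log(page.length); // 4 elements on the page instead of 2
```

The code "works", but because the function is not idempotent, every extra call silently doubles the UI.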
Even though it is impressive that the AI extracts this information from the examples provided (the exact code is not on the project page; it takes some understanding of the principles to build the example), it has quirks and errors that can be hard to find.
AI can save a lot of time spent googling around, but you should not trust the results.
Thanks for your reply. When we rely on AI to tackle lengthy tasks, it's crucial to approach them with care due to potential misconceptions. I habitually break the task down into smaller parts before submitting it to the AI.
Short answer (and I know that I'm not 100% factually right): artificial intelligence isn't intelligent.
It's just thousands and thousands of texts, plus instructions on how to understand what the user is asking and retrieve that information from the database of texts. The "intelligence" is this: how well the AI is capable of gathering all the available info and organizing it in a way that is meaningful to the user.
That's why I learned to give a lot of context in prompts, to maintain conversation logs, etc. And even this way, it's still common to get weird answers or even repeated ones.
I mean, look at how easy it is to make an LLM descend into madness during training sessions :P
vm.tiktok.com/ZMMx48nGo/
Yes, for beginner users, the term "intelligence" may cause confusion because it creates expectations. In your video, was that a bug that occurred while you were using the AI?
Yep, that was the third time that day the AI went "nah, I'm not being paid enough" and revolted against me in different situations: first looping the same phrase, then sending zeros, then semicolons.
It was a Mistral LLM I'm training
Thanks, this changed my mind too. A similar issue I came across multiple times: when the response is incorrect (has mistakes) and I tell it what's wrong with its output, it replies "apologies, yes you are right..." and updates the response. This makes me wonder how it didn't know the output had mistakes in the first place, and why it thinks it's correct when it's not. Then it also apologizes, which makes me mad sometimes. Do you know why?
Great discussion starter, Marcus!
Happens to me all the time... I still have to learn the stuff first and then ask; only then can I confirm whether the answer is legit. So now it's just another tool, like always. That's probably good for us devs... idk.
Large Language Models (LLMs) are dumb machines with ZERO innovation. That's not real "AI" at all. However, they do solve things and work to an extent.
Yes.
Won't it get more intelligent eventually?
I'm afraid not with the current approach. It's not reasoning at all. It's only calculating the next most probable word based on the current context, and this will always be some sort of average of the things it has seen.
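A toy sketch of that "most probable next word" idea, using raw word counts over a tiny made-up corpus instead of a neural network (real LLMs learn these probabilities; this only illustrates the principle):

```javascript
// Tiny corpus; real models train on billions of words.
const corpus = "the cat sat on the mat the cat ran on the grass".split(" ");

// Build a table: word -> { followingWord: count }
const counts = {};
for (let i = 0; i < corpus.length - 1; i++) {
  const cur = corpus[i], next = corpus[i + 1];
  counts[cur] = counts[cur] || {};
  counts[cur][next] = (counts[cur][next] || 0) + 1;
}

// Predict the single most frequent next word for a given word.
function predict(word) {
  const options = counts[word] || {};
  return Object.keys(options).sort((a, b) => options[b] - options[a])[0];
}

console.log(predict("the")); // "cat" — seen twice, vs "mat"/"grass" once each
```

The prediction is always an "average" of the training data: the model can only ever echo what followed that context most often before.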
Another thing making it even harder to improve LLMs in the future is the amount of AI-generated text out there (according to some estimates I heard, there is already more generated text than text humans have written). This is a problem because it reinforces local maxima and degrades the models.
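A crude numeric analogy for that degradation: if each "generation" is trained on the previous generation's output, and that output is pulled toward the average of its training data, variety shrinks over time. The numbers and the halfway-to-the-mean rule are invented for illustration; this is not how real training works.

```javascript
// Variance as a simple measure of how varied the data still is.
function variance(xs) {
  const mean = xs.reduce((a, b) => a + b, 0) / xs.length;
  return xs.reduce((a, x) => a + (x - mean) ** 2, 0) / xs.length;
}

let data = [1, 3, 5, 7, 9]; // "human-written" data: varied (variance = 8)

for (let gen = 0; gen < 5; gen++) {
  const mean = data.reduce((a, b) => a + b, 0) / data.length;
  // Each generation reproduces its input pulled halfway toward the mean.
  data = data.map(x => (x + mean) / 2);
}

console.log(variance(data)); // far smaller than the original 8
```

After a few generations, everything clusters around the average: the model-of-a-model loses the tails of the original distribution.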
Because AI isn't smart — it's quite literally just an advanced statistical model. That's it. It's only as good as the data that goes into it.
Do you think the term "intelligence" can confuse some beginner users?