I've always been fascinated by evolution, human evolution in particular. But more precisely, how the things we do and the behaviours we have today ...
I think AI could be a very useful tool, when not abused.
The breakthrough has already happened; we cannot put AI back in its box - so we might as well use it to ASSIST us as humans and elevate the abilities of our creative minds.
Currently AI is being (ab)used to completely replace human consciousness - i.e. to generate content, articles, and artworks just for the sake of generating them, not for the sake of human expression or added value. Everything is a mishmash of everything, a "laundromat" of content, being washed and rewashed and rewashed for the sake of consumption. So yea, currently it's pretty bad.
On a more positive note - those of us who keep being genuine and create purely human content, meant to express the human psyche and bring added value, will eventually be the exception - so we will stand out, and the wheel will turn once more.
Recently, I watched a video from Jim Kwik, the brain coach, and he suggests using AI to extend our HI (Human Intelligence), not to replace it. I've adopted that saying as a mantra too.
Sounds interesting, I'll look him up.
That would be cool, but I don't think we can control how that will affect us in the long run!
Let's hope some of us will keep being genuine and human!
I agree, AI should be treated as an extension of ourselves (the way we treat our phones, cars, and other devices), but it should never replace the human will. We need to evolve alongside it as a single species, rather than as two competing ones. As long as AI doesn't gain human consciousness (whatever that may be), it can never replace us or even come close to emulating the free-willed soul of a human being.
It depends entirely on what we do with our spare time, and that in turn depends on what we as a culture value and appreciate. The philosophical traditions that started in Greece over 2500 years ago were due to a combination of (historically) abundant spare time and an appreciation of knowledge, discussion, and the pursuit of truth.
Right now, it seems like (western) culture has more materialistic values, and we fill our spare time with entertainment and experiences rather than debating philosophical issues. So perhaps we will turn into "consumers of entertainment", and develop skills in observing and judging human relations, interactions and drama?
Interesting. Maybe if AI starts creating the entertainment, we won't even need to do that ourselves...
Yes, but we will still consume entertainment.
Yup, I meant that we might lose the need to create it ourselves, just consume it!
It could be the end of natural "evolution". Honestly, it could be the gene-editing/cyborg era.
Not expecting this in 2024, but on an evolutionary timescale, I feel like this is a reasonable outcome.
Good point, it does indeed feel reasonable. If we don't modify ourselves or become cyborgs, we might not be able to adapt!
I just wrote a blog about this yesterday because ChatGPT was down... Seems like others here are thinking the same thing :).
AI fatigue is a real thing, and we're only going to rely on AI more in the future. I don't think humans will stop thinking for themselves, because AI actually needs our smarts to get better. It'll be a tool, one of many, that can help us, not something that'll make us crash. Using anything too much is usually a bad idea, but when we use it right, it can be super helpful.
It's an interesting thing to think about, and somewhat scary 😅
I agree that we will still need to create, train and improve AI ourselves. But there might be a point where it does that itself and doesn't require us to do anything, really. And even if it requires some of us to do it, it will be a very small fraction of us.
Good question! I like hypotheticals like this, and all of the scenarios you've given are quite plausible, but an unexplored territory you haven't mentioned is the possibility of rejecting AI. As the other comments have pointed out, the current incarnation of AI is a content-generating machine. While some people are celebrating the potential to cut costs and make content creation easier, I would assume most people dislike this a lot, and some even fear its seemingly inevitable potential.
Of course, there are some really good use cases too. Just as the internet, with all of the problems it brought, still gave us the biggest libraries and forums of information the world has ever seen, AI now lets us create a synthesis of that information, one we can command to do whatever we want with that library.
Now, back to the rejecting-AI part: I feel and hope that more and more people will stop supporting the current trajectory of AI, now and in the future. I, for one, do not want to read a blog post with no one behind the words, or to learn that the thought-provoking article I just read came from a 3-second query some guy wrote and only skimmed.
I guess where I'm going with this is the "curated" rejection of efficiency: some tasks are just grueling work that needs to be as efficient as possible, while others do not need efficiency at all. Think of a speedrunner doing a blindfolded run and succeeding. He could have been faster without the blindfold, and yet far more people watch him, going absolutely insane when he succeeds and getting very upset when he fails, much more than if he had just done it the efficient way. Scenarios like that example are the thing I don't want taken from us.
TL;DR: In my perfect world, people will willingly struggle rather than let AI handle that struggle for them - but only the good struggles; the bad struggles will get automated away.
Very interesting, very interesting. I guess it's not that uncommon. Take digital drawing, for example: it's easier to create with, but many artists still choose traditional methods.
Interesting opinion.
It's fascinating to mix AI abilities with human ones. However, an AI is just a dumb machine. It has no language understanding like humans, nor does it have consciousness or emotional understanding like humans. Although the current LLMs you see seem super intelligent or smart, they are just machines that are good at predicting the next word based on statistical relationships. It's all about the transformer architecture. We can be proud of AI's capabilities, but they are not truly comparable to human abilities.
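To illustrate what I mean by "predicting the next word": here's a minimal sketch of how a transformer language model picks its next token. It assumes the Hugging Face transformers library and the public "gpt2" checkpoint, purely for illustration; the same idea applies to bigger LLMs.

```python
# Minimal sketch: next-token prediction with a transformer LM.
# Assumes `pip install torch transformers` and the public "gpt2" checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The meaning of life is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # logits: one score per vocabulary token, for every position in the prompt
    logits = model(**inputs).logits

# The model's "answer" is just the highest-scoring token at the last position -
# a statistical guess about what word is likely to come next, nothing more.
next_token_id = int(logits[0, -1].argmax())
print(tokenizer.decode(next_token_id))
```

All the apparent intelligence comes from repeating this step over and over, feeding each predicted token back in as part of the prompt.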
For now. That's why I said "in the case that AI becomes truly useful and really replaces our need to do certain things". AI is currently in its very early stages, but you could imagine it becoming extremely powerful in, let's say, 100-300 years, or even sooner.