DEV Community

What is the future of AI?

Rita Brown on March 17, 2023

Recently, when I saw another article about ChatGPT, I thought, "What's the big deal? Why is everyone talking and writing about it?" In my att...

leob • Edited

It's huge - the Fourth or Fifth Wave, after the Agricultural Revolution (when mankind left the caves and stopped being a hunter-gatherer), the Industrial Revolution, and the Information/Computer/Digital/Internet Revolution (that last "wave" should probably be seen as at least two or maybe three separate "waves").

Impact is impossible to gauge as of yet but it will be huge, huge, huge, just like the previous "revolutions" that I mentioned.


Rita Brown

Completely agree! I can't wait to see what will happen next


leob

Torn between hope and fear, lol ... as with any new development with a far-reaching societal impact, the potential for "good" is huge, but so is the potential for "bad" ...


Rita Brown

That's what scares me the most


Marcelo Arias

I like that perspective of seeing it as the fourth revolution.


idleman • Edited

The hardest part of a machine is the software, not the hardware. That has been known in the tech industry for decades, with few exceptions (think extreme conditions). ChatGPT and other AI advances offer a solution.

I believe we are entering a new era, much like the one personal computers ushered in during the 80s/90s. Things will likely emerge from it that we cannot foresee. If successfully implemented, AI tools will be able to replace a huge number of jobs, and it is naive to think all those people will be able to find other work.

The likely result is that various countries will have to implement some kind of "citizen salary". Political correctness is another issue: ChatGPT struggles to be "neutral", so people will probably create and run their own versions, decentralized from any large state or company. That, however, creates a further problem: whoever controls the world's compute power will be the person or organization really in control when that happens.

A solution to the above is something like homomorphic encryption, but for AI: preferably a decentralized network where many AIs can share resources, designed so that no one can see what attributes a specific AI in the network has. That way, every human could get a personal AI and choose whether it takes a (politically) conservative, socialist, or liberal view of things, and no one except the owner would know it :-)
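As a toy illustration of the homomorphic-encryption idea in the comment above (nothing AI-scale, just the classic additively homomorphic Paillier scheme, with deliberately tiny and insecure primes), a third party can combine two ciphertexts into an encryption of their sum without ever seeing the plaintexts:

```python
import math
import random

def keygen(p, q):
    """Paillier key generation from two primes (toy sizes only)."""
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    g = n + 1                      # standard simplification for g
    mu = pow(lam, -1, n)           # valid because g = n + 1
    return (n, g), (lam, mu)

def encrypt(pub, m):
    """Encrypt message m (0 <= m < n) with fresh randomness r."""
    n, g = pub
    n2 = n * n
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(pub, priv, c):
    """Recover the plaintext using the private key (lam, mu)."""
    n, _ = pub
    lam, mu = priv
    n2 = n * n
    L = (pow(c, lam, n2) - 1) // n
    return (L * mu) % n

pub, priv = keygen(17, 19)         # toy primes; never use in practice
c1 = encrypt(pub, 20)
c2 = encrypt(pub, 22)
c_sum = (c1 * c2) % (pub[0] ** 2)  # multiply ciphertexts = add plaintexts
print(decrypt(pub, priv, c_sum))   # prints 42
```

Running a full neural network under such a scheme is far more involved (real systems need schemes that also support multiplication, such as CKKS or BFV), but the principle the commenter describes, computing on data you cannot read, is exactly this.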


Rita Brown

I support the idea that, as AI develops, governments must ensure that no one suffers harm, whether financial or physical.


Brayan Paucar

My opinion is in the same vein. AI tools can improve many aspects of different industries, including the building of AI tools themselves. And the part you mention, "humankind must be careful in its inventions", should be written into the policies of the companies that create these kinds of tools.


Rita Brown

That's true. My biggest concern is that governments of different countries may miss the moment when AI regulation becomes crucial for everyone.


Wiktor Wandachowicz

"in a few decades, when developers figure out how to insert AI into robots" - please don't do that! And I know that it will happen anyway, because in the end AI is just a software, interesting algorithms and math - so it's easy to put such programs in microcontrollers and more powerful CPUs. The question is: do we really need this? After AI-assisted electronics is given the possibility to control electric signals, mechanics, pneumatics, servomotors, etc. there is only a very narrow line between doing good and bad things. For example armies will surely take as much advantage of it as possible, both friendly and hostile armies. For "our protection" of course...

Half of the countries in the world do not benefit from industrial and scientific growth; they don't have sufficient access to clean water, food, education, and health care. So the people living there could be considered undeveloped, poor, or even barbarians (!) to some degree. If applying AI here and there proves to be a success, such people will want to reap its benefits too. How do you do that when you don't have such technology at home? Buy it for enormous amounts of money? Maybe steal or seize AI equipment, factories, and knowledge by force? You can imagine what may happen next, given what we know about ourselves as a human race.

How to avoid such black scenarios? The answer is both easy and complicated: share, cooperate, and love each other. Then we may really reach another level of civilizational development. Until that starts to happen, I would be very cautious about applying Artificial Intelligence (a.k.a. advanced math and algorithms) in electronics everywhere without further thought.


Rita Brown

I'm so excited to read such deep thoughts on this matter, especially the part about countries that don't have enough technology to implement and use AI on an everyday basis. Bad scenarios must be carefully considered by the developers who create AI and integrate it into different tools in the first place.


Lorand Kedves

I like your final thought the most: "The true challenge of AI is figuring out how natural intelligence works." It resembles an often-cited but rarely read paper by Alan Turing, "Computing Machinery and Intelligence" (yes, the one with "the test"...).

But I would go further: the common ground should be a constructive, objective definition of intelligence. That would allow both (1) identifying the "intelligent segment"(!) of natural human thinking, and (2) automating it, which, quite obviously to me, is not a gigantic chatbot adapting to the global average of human noise ("hand-censored" to be less aggressive). That goes back to the forgotten origins of information science, and that rabbit hole is really deep.

For art references, I would rather go with Asimov, Lem, or the Colossus trilogy by D. F. Jones (even the movie Colossus: The Forbin Project is really good).