Leading AI academics and industry experts, including Steve Wozniak and Elon Musk, published an open letter today calling for a pause on developing AI more sophisticated than OpenAI's GPT-4. The letter cites risks to society and humanity as a major concern and asks for the pause so the industry can develop shared safety protocols.
Do you agree with the consensus of the experts? Is a pause even a realistic option when you factor in global politics and capitalism? Share your thoughts below!
Top comments (82)
Excerpt from a recent post I made on the general topic:
My post is not altogether coherent, as I'm having trouble fully wrapping my head around all of this (as I suspect others are as well).
But I definitely see merit in some serious discussion about this. I'm a little too young to really have a sense of how things went down at the time, but the Internet itself didn't just happen without a lot of debate and policy. I think we have to welcome this kind of debate and hope it leads to some healthy discussion at the government level (though that doesn't seem likely).
I'm not personally clear on the merits of a "pause" vs other courses of action, but I think it's a worthy discussion starter.
I think you're onto something with the chaos narrative; it aligns with the sentiment I've developed reflecting on the impact of social networking and mass connectivity.
Spot on - social networking has had huge and far-reaching consequences (most prominently negative ones) which not many people foresaw at the time, back when Facebook introduced an innocuous-sounding platform allowing people to share their cat photos and the like with family & friends - I mean, what could possibly go wrong? ;-)
Social media is THE perfect example to look at with regards to “what can go wrong will go wrong.”
"Creating chaos can be easier than preventing it..."
Sure. It generally is easier to break than to build; to become an 'agent of entropy', in a sense.
This seems like relevant precedent, albeit from simpler times.
The fairness doctrine of the United States Federal Communications Commission (FCC), introduced in 1949, was a policy that required the holders of broadcast licenses both to present controversial issues of public importance and to do so in a manner that fairly reflected differing viewpoints. In 1987, the FCC abolished the fairness doctrine, prompting some to urge its reintroduction through either Commission policy or congressional legislation. The FCC formally removed the rule that implemented the policy from the Federal Register in August 2011.
I'd say I'm most concerned about the LLM work happening that we don't know about.
OpenAI and GPT-4 are surely the central players here, but I want a better idea of what is being worked on holistically.
This is definitely where my mind went. It's a really hard thing to convince folks to slow down on developing something once it's already in motion, even if we know it's potentially dangerous. People can see the power in this tech, and unfortunately greed and the desire to be first and capitalize on this kind of stuff often trump caution and thoughtfulness. I worry that people aren't going to slow down.
As for the "global politics" point, one thing about computers and information technology is that it's becoming easier and easier for everybody to access. This is generally a great and awesome thing, but it also means lots of folks are empowered to work on this independently. It doesn't necessarily take a lot of resources: if you have a computer and do your research, you can work on AI. It's pretty easy to connect with other like-minded folks online, and you could build a team or find an open source project to contribute to. Now, I'm not totally well-versed in this space, so I imagine you probably need access to pretty powerful computers to efficiently experiment with and train AI, but still, computers are always getting more powerful and this tech is becoming more and more accessible. There are relatively few barriers for those who want to work with AI.
I sincerely hope that we take a collective pause and think through the ramifications of this stuff before moving forward. Diving into a space like this without any shared protocols or regulation is dangerous. And even saying that, I worry that regulations will be hard as hell to enforce given, as you mentioned, capitalism and global politics, but I think it's very important that we try.
This is where I wish I had some context on the volume of resources to run GPT-4 in a certain capacity. While I want to naturally assume that it may be at a scale which prohibits accessibility, I also realize we have an industry of crypto-miners with the type of resources that could potentially be repurposed under the right circumstances - either by the mine owners or someone buying up mining resources.
To be clear, I have no context on the amount of resources necessary to effectively run GPT-4. But you make a very good point — in the age of crypto-miners, there's a lotta folks out there armed with incredible computational power!
From what I understand, this sort of algorithm doesn't distribute well. Crypto miners need to hash a small, fixed piece of data as fast as possible. GPT needs each calculation to operate on all of the billions of parameters repeatedly, so memory latency is paramount. That's why they aren't just jacking up the parameter count faster: GPT-4 required improving the supercomputers it runs on (to oversimplify, they needed more RAM). This is why the devs toying with LLaMA are focused on quantization, as sketched below.
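For anyone curious what quantization buys you, here's a minimal sketch of naive symmetric int8 quantization (my own illustration; the function names are made up, and real projects like llama.cpp use fancier block-wise schemes):

```python
import numpy as np

# Illustrative sketch: map float32 weights to int8 plus a single
# per-tensor scale factor, cutting the memory footprint to a quarter.

def quantize_int8(weights: np.ndarray):
    scale = np.abs(weights).max() / 127.0         # largest magnitude maps to 127
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale           # approximate weights at inference

w = np.random.randn(4096, 4096).astype(np.float32)   # one toy weight matrix
q, scale = quantize_int8(w)

print(f"float32: {w.nbytes / 2**20:.0f} MiB, int8: {q.nbytes / 2**20:.0f} MiB")
print(f"max reconstruction error: {np.abs(w - dequantize(q, scale)).max():.4f}")
```

Shrinking the weights this way, at the cost of a little precision, is what lets big models squeeze into ordinary RAM at all.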
Ooo good to know and thanks for chiming in with this info — makes sense!
I think the frighteningly rapid conversion of GPTx into a closed-source, for-profit product was highly irresponsible, and a pause on the rollout of these really big models would definitely be a good idea, but I fear the horse has already bolted.
Much more work and attention needs to be applied to mechanistic interpretability. A lot of what is going on now seems quite far from serious science or engineering, with financial gain being the key motive. There needs to be some serious reflection, given the power of what is being developed.
As a proud capitalist, I don't disagree with the profit motive, but as an engineer, I can also see a narrative where they set out to achieve what they weren't sure was possible.
Regardless, if AI is able to operate beyond a closed system (e.g., paying a human to bypass a CAPTCHA), it's very much time for some review and analysis.
No one's going to pause. And frankly, this should've been something done with social media ten years ago. Look at the mess of false advertising and political falsehoods constantly spread on social media without any real laws stopping it, and even when something is done, no one in charge gets into any real trouble.
Remember in 2019 when the FTC fined Facebook a whopping $5 billion?
Facebook's revenue was $70 billion that year and has only gone up each year since.
There is no real oversight at the global scale, and there will be no pause in AI development; even if the corporations involved sign agreements saying they'll stop, they will all secretly keep going anyway. In the end, the fines won't even dent their profits if they create an actual artificial general intelligence (AGI is the next step beyond AI, and GPT-4 is apparently showing some early signs of it).
There will be no pause. The repercussions won't affect the rich or the corporations anyway, so why would they stop? It will only affect the general population, most of whom don't have a clue what machine learning or artificial intelligence really are.
This is a great (long) online book, and goes into some of the stuff you mention:
Table of contents | Better without AI
How to avert an AI apocalypse... and create a future we would like
When talking about these kinds of "issues", one needs to sit down and think through the worst possible use cases.
We're not talking just about AI taking over and ruling the world, making us all slaves (assuming we aren't already), as the thing to fear; that would take "too long".
The things that are on the table (aside from the ones in the open letter) on a shorter timescale are:
Deep fakes in real time
In general, identity theft taken to its maximum expression.
Propaganda and information control
Whoever is in control of AI (if there is even such a thing past a certain point) could well use it to filter content and funnel propaganda to all users, like a Black Mirror episode, but IRL.
It's simple: if you can ban certain content from the AI that references dicks, you can also ban content that references critical thinking or any other "concept" or "idea".
This can be especially harmful now that tons of people treat politics as if it were a religion (e.g., adding an "idea" to the "pack" of a given political side should be quite easy in this situation, sociologically/psychologically speaking).
Other
You can add here whatever concern you have, from training AIs to hack companies' or individuals' systems from a location not attached to the hacker, to presenting fake videos as if they were real to add fuel to the fire in unstable countries, anything in between, and everything beyond.
Given that we already live in a society that's highly polarized, I share your concerns here with how these technologies will be used for influence.
After the "Pope with drip" image fooled basically everyone, what's going to stop people from creating realistic AI porn from your LinkedIn picture and asking you to pay them in bitcoin to remove it?
I don't even know if pausing development is the actual solution, and 6 months is not enough when laws can take yeaaaars to be approved due to, well, shenanigans.
Blagh, I honestly just don't know anymore, I'll be here eating popcorn while seeing people become even more divided over everything.
grumpy panda
I stan Pope Drip I, long may he rule
To drip or not to drip should be the question.
If AI was used to make everyone a character in John Wick I'd be much less grumpy 😂
I think the big question is: where on the curve of AI are we? Is this the beginning, or already close to the end of what is possible? Nobody can really say. Just throwing more data into statistical models has a limit, and the point of diminishing returns has probably been reached. But we simply don't know if somewhere in a garage some guys are building for AI what Google was for the web.
If we keep following this path - where will GPT get new data from? People will just turn to it instead of sharing, discussing, and discovering new things all around the internet - resulting in no new content for the machine. The whole thing becomes a self-reinforcing echo chamber that endlessly regurgitates and remixes existing knowledge - and all in the hands of a select few organisations... that is some seriously frightening power for them to have.
Another interesting read from Twitter:
Unfortunately, we're nowhere near the end, and new training data is no longer the bottleneck. Neither of those reassurances stands.
AI fanatics are currently working on multimodal systems (combining image processing and generation with text processing and generation, and eventually other modes too) and cyclic systems (LLM output drives actions, the results are processed by the LLM, repeat). Google has already demonstrated a closed-loop system using cameras and robotic arms. OpenAI is actively attempting to make GPT successfully make money off of itself, in coordination with copies of itself, given a starter budget and AWS credentials.
So, basically, we're less than a few years away from these things doing their own novel research. Scientific discoveries will be known to AI before they are known to humans.
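To make the "cyclic system" idea concrete, here's a rough sketch of the loop these setups run (my own simplification; call_llm and execute_action are hypothetical stand-ins, not any real API):

```python
import json

# Sketch of a cyclic LLM system: the model proposes an action, the action
# is executed, and the result is fed back into the next prompt.
# `call_llm` and `execute_action` are hypothetical placeholders.

def call_llm(prompt: str) -> str:
    """Placeholder: would call a hosted model, returning a JSON action."""
    raise NotImplementedError

def execute_action(action: dict) -> str:
    """Placeholder: would run the requested tool (search, shell, etc.)."""
    raise NotImplementedError

def agent_loop(goal: str, max_steps: int = 10) -> None:
    history: list[str] = []
    for _ in range(max_steps):
        prompt = f"Goal: {goal}\nHistory: {history}\nNext action as JSON:"
        action = json.loads(call_llm(prompt))
        if action.get("type") == "done":          # model decides it's finished
            break
        result = execute_action(action)           # output drives real actions...
        history.append(f"{action} -> {result}")   # ...and feeds the next prompt
```

Nothing in that loop requires a human to review anything between steps, which is exactly what makes people nervous.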
Yeah, the pace at which things are pushed ahead right now is really scary. Who thought plugins would be a good idea to integrate into ChatGPT? Gated AI, anyone? Also, Auto-GPT did open another scary door. And those are only the things we can see. I'm getting more and more nervous about all these developments and am starting to think a global pause would be necessary. But whom am I kidding? The genie is out of the bottle. Or to use another analogy: the flood gates are open, and unfortunately the canals in the valley are not even planned yet...
You're assuming that remixing knowledge can't produce new knowledge. A tremendous percentage of research is filtering existing data to attempt to find new insights, and those insights become new data to factor into additional research.

You're assuming that there is something fundamentally unique to humans that would provide novel content, but that's not true. Michelangelo (purportedly) said, "The sculpture is already complete within the marble block, before I start my work. It is already there, I just have to chisel away the superfluous material."

Fundamentally, advancements in mathematics are the same: just try everything and see if you can find a pattern you can't disprove.
Honestly, the rise of AI may be the end of the internet and the return of simple, real, honest human interaction.
Dealing with bots has been a PITA for years, and now bots are becoming indistinguishable from real people. We'll therefore reach the point where everyone and everything is assumed to be a bot. That's our wake-up call to disconnect from it.
I've dedicated my whole life to building the internet. And I gotta admit, I'm happy to see it die before I do.
In my opinion, based on current circumstances, it has to be paused immediately. We have seen AI grow rapidly; some people and news outlets have even labeled it "an AI arms race" among leading tech companies around the world. However, the majority of the development seems focused on business instead of contributing knowledge. It could disrupt our socioeconomic fabric.
Instead, it would be much better to have openness behind their magic; it would let other computer scientists contribute and make improvements. Although it would also enable "evil" people to use it, at least "good" people could counter them and keep things in balance. We have seen how the power of the open-source system behind Linux has led us to where we are today.
Another thing I fear most is AI being used as a weapon of war. Historically, a lack of regulation led to the atomic bomb in World War II; with AI, we should prevent similar human tragedies in the future. Regulation on the outcomes of AI should be in place and agreed upon globally to provide boundaries in these scenarios.