No (AI) news, (no) good news
After attending Web Summit in Lisbon a couple of weeks ago, one of the world’s leading technology conferences, I was struck by how prominently artificial intelligence featured in every discussion. The theme dominated the event’s agenda: its future, its training, the responsibility it entails, and its many applications. I returned from the conference intending to dig deeper into the topic, and the universe obliged by serving up the most compelling drama the tech world could have produced at this moment.
Two days after the conference closed, on November 17th, and in the middle of an investment round, Sam Altman, CEO and co-founder of OpenAI, named Forbes’ top investor under 30 in 2015 and an early backer of companies such as Airbnb, Reddit, Pinterest, and Stripe, was dismissed by the board. The dismissal stirred a commotion on social media. Microsoft, which reportedly holds 49% of OpenAI’s for-profit arm after its $10 billion investment, seized the moment by hiring Altman to lead a new artificial intelligence division, together with Greg Brockman, OpenAI’s co-founder and former president, who had resigned following Altman’s departure. Around 95% of OpenAI’s employees responded with a letter demanding Altman’s reinstatement and threatening to resign if it wasn’t granted.
Rumors and speculation quickly multiplied. OpenAI was founded as a non-profit whose original mission was to keep the technology from falling into the wrong hands, and its board members were not supposed to hold financial stakes in the company, so that economic interests could not influence their decisions. The rumors suggested that Altman had not been consistently candid in his communications with the board, hindering its ability to exercise its responsibilities, or that he might have been negotiating with Microsoft behind the board’s back for a larger stake. It appears that both Altman and Brockman wanted to accelerate the deployment of this technology despite the risks involved, while the board, with a more philanthropic outlook, advocated a more conservative and less aggressive path grounded in ethics and safety.
This internal clash of ethical interests reportedly revolved around Project Q*, an internal model said to show new capabilities in solving mathematical problems and regarded by some as a step toward artificial general intelligence (AGI). Fears that its development could have disastrous consequences for society deepened the disagreement over how to approach the regulation of artificial intelligence.
Ilya Sutskever, co-founder and board member, seemingly a supporter of Altman’s dismissal, had warned of the risks this technology could entail. Days later, however, Sutskever expressed regret and signed the joint letter with the rest of his colleagues calling for Altman’s return. As if that weren’t enough, Larry Summers, former president of Harvard University and former Secretary of the Treasury during the Clinton administration, who had once gone viral for suggesting that women had an innately inferior aptitude for science and mathematics, joined the new board. Bret Taylor, known for his work on Google’s mapping technology and as CTO of Facebook, joined as well, and Altman’s aforementioned colleague Greg Brockman also returned.
On November 21st, Altman announced his return to OpenAI as if nothing had happened and everything was back to normal. The latest news is that OpenAI agreed to purchase artificial intelligence chips from the startup Rain for $51 million. Altman has also personally invested in Rain, which develops neuromorphic chips that mimic characteristics of the human brain, and the arrangement has raised concerns about further potential conflicts of interest. Rain, meanwhile, faces challenges of its own, such as being forced to sell a stake due to national security concerns, which could delay chip deliveries to OpenAI. Altman has also explored starting a new chip company to diversify the supply of chips for the artificial intelligence market.
These past two weeks have marked a remarkable shift in OpenAI’s direction, away from its initial philanthropic and ethical approach and onto a more business-oriented path. Its closeness to Microsoft has exposed its vulnerability, as Alex Kantrowitz, founder of the Big Technology Podcast, noted in one of his Web Summit talks on AI predictions.
We’ll be watching the next chapters of this exciting development, hoping that the decisions made by its revamped leadership are the right ones.