- Ethereum co-founder Vitalik Buterin has warned against rushing toward superintelligent AI.
- The Ethereum co-founder highlighted several risks associated with failing to take a cautious approach to developing the technology.
- The warnings come as recent developments at OpenAI have cast doubt on the safety consciousness of AI firms.
Over the past year, artificial intelligence has made swift and remarkable progress. However, the question of whether the current rate of advancement is something to be celebrated has divided opinion.
While some welcome the rapid pace of development given the technology’s potential, others have called for a more cautious and thoughtful approach in light of the risks.
Among the latter is Ethereum co-founder Vitalik Buterin. In a series of recent X posts, the blockchain developer has stressed the risks of allowing the industry to grow too rapidly.
Vitalik Pushes Back Against Accelerated AI Development
“Superintelligent AI is very risky and we should not rush into it,” Buterin asserted in an X post on Tuesday, May 21, expressing concern over efforts to accelerate AI development.
“we should push against people who try. No $7T server farms plz,” the Ethereum co-founder added in direct response to OpenAI CEO Sam Altman’s reported plans to raise $7 trillion to accelerate chip production.
Buterin’s underlying worry is that such efforts would concentrate significantly more power in the hands of a few corporations and governments to the detriment of individuals. Among the issues he highlighted were undue government surveillance and weaponization.
"I really worry that the net effect of things like this is lots of human disempowerment, and suffering at the hands of unchallengeable powerful actors," he wrote.
To counter these outcomes, he suggested focusing on AI models with lower barriers to entry while calling for greater regulation of larger models.
Buterin’s comments come as recent developments at OpenAI have raised questions about whether leaders in the field are paying enough attention to the safety of the technology they are building.
OpenAI Throwing Caution to the Wind?
Over the past week, OpenAI has disbanded its Superalignment team, which was established in July 2023 to ensure that future AI systems more intelligent than all humans combined could be safely controlled. The move followed the resignations of the team’s leads: Ilya Sutskever, the firm’s former chief scientist, and Jan Leike, a long-time researcher.
After resigning on Friday, May 17, Leike whipped up a storm by stating in an X post that safety had “taken a backseat” at the firm.
“over the past years, safety culture and processes have taken a backseat to shiny products,” he wrote, stressing that the Superalignment team had long gone without the support it needed.
In July 2023, OpenAI publicly committed to allocating 20% of its computing resources to the Superalignment team. However, a recent Fortune report citing sources close to the matter suggests that the commitment was little more than lip service.
On the Flipside
- Despite Buterin’s concerns about the accelerated development of AI, the developer has expressed optimism about using AI to vet blockchain code.
Why This Matters
The rapid growth of AI technology over the past year has sparked equal measures of dread and excitement among observers, as the technology promises to shape the course of human evolution.
Read this for more on Buterin’s thoughts on AI:
AI Can Tackle Ethereum’s #1 Technical Risk: Vitalik Buterin
Learn more about the new Ethereum interoperability standard proposed by Uniswap and Across Protocol:
What Is ERC-7683, the Cross-Chain Trade Execution Standard?