Statement on Superintelligence posted by the Future of Life Institute

CONTEXT

Innovative AI tools may bring unprecedented health and prosperity. However, alongside tools, many leading AI companies have the stated goal of building superintelligence in the coming decade that can significantly outperform all humans on essentially all cognitive tasks. This has raised concerns, ranging from human economic obsolescence and disempowerment, losses of freedom, civil liberties, dignity, and control, to national security risks and even potential human extinction. The succinct statement below aims to create common knowledge of the growing number of experts and public figures who oppose a rush to superintelligence.

STATEMENT

We call for a prohibition on the development of superintelligence, not lifted before there is
1. broad scientific consensus that it will be done safely and controllably, and
2. strong public buy-in.

Link to Statement on the Future of Life Institute website

The same Statement on Superintelligence posted by EKO

CONTEXT

Ban superintelligent AI until it’s safe

Reckless tech billionaires are racing to build AI ‘superintelligence’ – raising massive ethical and security concerns.

From controlling nuclear weapons, to replacing millions of jobs, to creating deadly viruses – unfettered AI development could have catastrophic consequences for us all.

And it’s keeping top experts and scientists awake at night. 400 signatories, including Nobel Laureates, CEOs, faith leaders and public figures, have put their names behind a powerful new call to ban this advanced AI until it’s safe: the Statement on Superintelligence.

That’s the foundation. Now it’s up to us to build a people-powered campaign so big that our governments have to respond.

AI experts believe that superintelligence could be less than ten years away, and they warn that we do not know how to control it. That’s why hundreds of leading public figures are calling for AI tools to be developed securely and for those tools to be targeted at solving specific problems in areas like health and education.

Recent polling shows that three-quarters of U.S. adults want strong regulations on AI development, preferring oversight akin to pharmaceuticals rather than the tech industry’s “self-regulation.” And almost two-thirds (64%) feel that superhuman AI should not be developed until it is proven safe and controllable, or should never be developed.

Big Tech lobbyists say that a moratorium could give rogue actors or states an advantage. But that argument underestimates the catastrophic potential for all of humanity as advanced AI is developed – regardless of which country it’s ultimately made in, or whether we ever really achieve “AI superintelligence”.

Among the 400+ initial signers of the Statement on Superintelligence are retired military leaders and security advisors, journalists and academics, policy-makers, priests and CEOs. Let’s add our names too, and show governments and Big Tech that it’s time to act.

Just in the last month, Amazon’s Jeff Bezos and OpenAI’s Sam Altman have admitted that there’s an AI investment bubble. As the bubble threatens to burst, the pressure on AI companies to cut corners, cover up mistakes, and ignore warnings is only going to increase.

That’s why we need to speak out NOW.

Link to Statement on the EKO website


Statement on AI Risk by the Center for AI Safety

CONTEXT

AI experts, journalists, policymakers, and the public are increasingly discussing a broad spectrum of important and urgent risks from AI. Even so, it can be difficult to voice concerns about some of advanced AI’s most severe risks. The succinct statement below aims to overcome this obstacle and open up discussion. It is also meant to create common knowledge of the growing number of experts and public figures who also take some of advanced AI’s most severe risks seriously.

STATEMENT

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

Link to Statement on the CAIS website
