Mission Statement

Pause frontier AI experiments

We believe the current rate of AI progress is rapid, dangerous, and largely unregulated. We are on track to build smarter-than-human intelligence well before we are capable of understanding or directing it. If this continues, AI will likely lead to the total extinction of the human species. Let’s pause.

We call for an indefinite pause on colossal AI experiments. By “colossal AI experiments”, we mean any AI training runs that go beyond the current state of the art (GPT-4, released in March 2023).

Many experts believe state-of-the-art AI systems will become superhumanly intelligent: capable of outperforming humans at nearly everything. We believe we should not proceed until we are confident that we can understand and control such systems.

We should not build colossal AI systems until we understand what we are doing. 

We should not even approach the boundary of colossal AI systems until we are confident that we know where the boundary is, and what we’re going to do when we get there. 

AI experts do not understand how state-of-the-art AI systems work, how capabilities emerge, how objectives are formed, or how long it will be before AI can overpower humanity. They do not have things under control.

The stakes are high. These systems pose an existential threat to humanity. Experts have documented various ways that AI could overpower humanity, as well as numerous unsolved technical challenges that need to be addressed.

Awareness about these risks is growing. 

  • AI experts: About half of AI experts report that AI has at least a 10% chance of causing “human extinction or similarly permanent and severe disempowerment of the human species.”
  • Public: Polls suggest that about half of the US public is concerned about the possibility that AI could pose a “threat to the existence of the human race”. 
  • Media: Recent articles and interviews in the NYT, FOX, TIME, NBC, Vox, the Financial Times, and other outlets are raising awareness about these risks. 

We are not ready. 

We should pause colossal AI experiments until we understand what we are doing and we have confidence that we can safely proceed.

What should happen during this pause?

  1. Training run restrictions: No one should be permitted to train AI systems more powerful than OpenAI’s GPT-4. During the pause, humanity may still reap the rewards of existing AI (a so-called “AI Summer Harvest”).
  2. Technical research: Society should invest more in technical research aimed at reducing risks from advanced AI. Examples include research on interpretability, agency, threat models, and novel directions.
  3. Governance: Governments should develop and implement policies to mitigate risks from colossal AI experiments. One example would be an international agreement, similar to the Treaty on the Non-Proliferation of Nuclear Weapons, enforced through compute monitoring.

Is it possible that the pause could backfire?

  • Yes. We have not heard any ambitious proposals that have a 0% chance of backfiring.
  • A poorly-implemented pause could lead to an even more dangerous AI race later on. For example, a pause is less likely to work if it only applies to one country or one lab.
  • It is not clear when we should lift the pause. We need to make a lot of progress on understanding and directing these systems. If we do, we might be able to develop specific criteria to decide when (and how) a pause could be lifted.
  • We support the pause not because we think it is perfect, nor because we think it is guaranteed to work. We support it because we think it is one of the best options we currently have, and we think it is significantly better than the status quo. 

What can you do to help?

  • You can sign the letter here.
  • You can learn more about risks from advanced AI via these resources.
  • You can speak to your friends, family, and colleagues (see this post and this post).
  • You can contact us if you are interested in getting involved.