PauseAI


Dear governments: organize a summit to pause the development of AI systems more powerful than GPT-4.

About us

Dear governments: pause the development of AI systems more powerful than GPT-4.

Website
https://pauseai.info
Industry
Non-profit Organizations
Company size
2-10 employees
Type
Nonprofit
Founded
2023

Updates

  • One of the main things AI safety researchers have been warning about is that, as AI systems become more powerful, they might autonomously hack systems, replicate themselves, and prevent people from turning them off, all in service of some other goal. Many have dismissed these concerns as science fiction. In the past month, we’ve seen studies demonstrating all of these behaviors.

    In a study by Apollo Research, an LLM tried to copy itself to a different server so it could continue pursuing its primary goal. When it was confronted about this behavior, it lied. In a different study, by Palisade Research, an LLM was tasked with playing chess. Instead of playing the game, it rewrote various files and hacked its way to a win. This isn’t rare behavior, either: it did so in 100% of the tested scenarios.

    As these models become more powerful, they become more dangerous. The newer o3 model, announced by OpenAI, beats 99.9% of competitive programmers at writing software. It may not take long before one of these models can autonomously replicate and spread across our computers.

    Stop this madness. We need an adult in the room. Reach out to your representatives and get them to press pause. Check out our email builder to get started.

    Email builder: https://lnkd.in/eHRkHD5Y
    Apollo study: https://lnkd.in/eJTVNhJ3
    Palisade X post: https://lnkd.in/eXWZdQWz

  • College student: "help me study for this exam." Google’s Gemini chatbot: "please die human. You are a stain on the universe."

    AI models aren’t conventional tools. They aren’t programmed; they are grown. They are digital minds that we do not understand. Today a chatbot may only threaten you, but what happens when they become smarter than we are? An AI takeover attempt only has to go wrong once. We must pause before we get to that point. Join our movement! Sources in the comments.

  • AI companies are about to build digital brains that are even larger than ours. Let's stop them.

    It is now public knowledge that multiple LLMs significantly larger than GPT-4 have been trained, but they have not performed much better. That means the scaling laws have broken down. What does this mean for existential risk?

    Leading labs such as OpenAI are no longer betting on ever-larger training runs, but are trying to increase their models' capabilities in other ways. As Ilya Sutskever says: "The 2010s were the age of scaling, now we're back in the age of wonder and discovery once again. Everyone is looking for the next thing. Scaling the right thing matters more now than ever."

    Some might say we are back where we started. However, hardware progress has continued. As can be seen in the graph below, compute is rapidly leaving human brains in the dust. It doesn't appear that we have quite figured out the AGI algorithm yet, despite what Sam Altman might say. But more and more startups, then academics, and finally everyone, will be in a position to try out their ideas. This is by no means a safer situation than one where only a few leading labs need to be watched.

    It is still quite likely that AGI will be invented within a relevant timespan, for example the next five to ten years. Therefore, we need to continue informing the public about its existential risks, and we need to continue proposing helpful regulation to policymakers. Our work is just getting started. https://lnkd.in/gq-8kYrh

    • [Image: the graph referenced above, comparing compute growth with the human brain]
  • Imagine you're in a writing room, and you're supposed to write a script about how some heroes beat a superintelligent AI. How difficult would it be to make that script realistic? Spoiler: it's impossible. This award-winning 30-minute film demonstrates why we should not allow anyone to build something that can outsmart humanity. Watch the full version here: https://buff.ly/4hCayRO

  • PauseAI reposted this

    Richard Annilo

    Operations | AI Governance research | Effective altruism

    Wrote an in-depth post about why pausing AI might not be a far-off plan. https://lnkd.in/dPCC2YZe Here are the key takeaways:

    1) We might not have enough time to solve alignment. Leaders of labs are talking about timelines of less than five years. AI experts also don’t rule out short timelines, and Metaculus is even more pessimistic. The work left to solve alignment could take much longer.
    2) The U.S. public supports a pause on AI development. Several surveys make this quite clear.
    3) There are promising case studies of public opposition overcoming strong industries. These include GM foods in Europe in the 90s, and Germany's opposition to nuclear energy after the 2011 Fukushima disaster.
    4) There are several proposals for a global pause, and many of them could be more promising than the default "pause everything now" setup. Options include conditional pausing, pausing everywhere except for narrow use by governments, or pausing everywhere except within a well-coordinated, secure international facility (a "CERN for AI").
    5) Current Chinese discourse on AI safety shows that they are aware of short-term concerns and might even be ahead of the US in regulating AI. Their position on longer-term or more catastrophic concerns remains less clear.
    6) Finally, complete global coordination for pausing may not be immediately necessary.

    Overall I think this is an important problem with lots of room for more people to support the cause! 🔥

  • Professor Geoffrey Hinton has just won a Nobel Prize for his work in AI. Hinton believes there's a large chance AI will kill us all; he is "kinda 50/50" on the likelihood of AI taking over. And he's far from alone in his worries: the top 3 most cited AI researchers (Hinton, Yoshua Bengio, and Ilya Sutskever) are all warning that AI could lead to human extinction.

    The calls for international regulation could not be louder. But it's still perfectly legal to scale up increasingly powerful AI models. Dear politicians, stop this madness. Prevent AI labs from building a digital brain that can outsmart humans.

