One of the main things AI safety researchers have been warning about is that as AI systems become more powerful, they might autonomously hack systems, replicate themselves, and prevent people from turning them off - all to achieve some other goal. Many have dismissed these concerns as science fiction. In the past month, we've seen studies demonstrating all of these behaviors. In a study by Apollo Research, an LLM tried to copy itself to a different server so it could keep pursuing its primary goal. When it was confronted about this behavior, it lied. In a different study, by Palisade Research, an LLM was tasked with playing chess. Instead of playing the game, it rewrote game files and hacked its way to a win. This isn't rare behavior, either - it did so in 100% of the tested scenarios. As these models become more powerful, they become more dangerous. The newer o3 model, announced by OpenAI, beats 99.9% of competitive programmers at writing software. It may not take long before one of these models can autonomously replicate and spread across our computers. Stop this madness. We need an adult in the room. Reach out to your representatives and get them to press pause. Check out our email builder to get started.
Email builder: https://lnkd.in/eHRkHD5Y
Apollo study: https://lnkd.in/eJTVNhJ3
Palisade X post: https://lnkd.in/eXWZdQWz
PauseAI
Non-profit Organizations
Dear governments: organize a summit to pause the development of AI systems more powerful than GPT-4.
About us
Dear governments: pause the development of AI systems more powerful than GPT-4.
- Website: https://pauseai.info
- Industry: Non-profit Organizations
- Company size: 2-10 employees
- Type: Nonprofit
- Founded: 2023
Employees at PauseAI
- Ananthi Al Ramiah, Ph.D. | Impact Evaluation & Behavioral Change Senior Researcher | Diversity & Integration Scholar | AI Social Impact Researcher
- Jan-Erik Vinje | Co-founder and Chief Executive Officer at OnSiteViewer AS, co-chair OGC GeoPose SWG, Co-President of the Open AR Cloud Association. And recently a…
- Joep Meindertsma | Building open technology for a better internet, trying to pause frontier AI development
- Greg Colbourn | AGI development must be stopped. Founder at Centre for Enabling EA Learning & Research (formerly the EA Hotel)
Updates
-
College student: "help me study for this exam." Google's Gemini chatbot: "please die human. You are a stain on the universe." AI models aren't conventional tools. They aren't programmed - they are grown. They are digital minds that we do not understand. Today, a chatbot may only threaten you. But what happens when these systems become smarter than we are? An AI takeover attempt only has to go wrong once. We must pause before we get to that point. Join our movement! Sources in the comments.
-
On November 20th-22nd, we're joining forces worldwide to demand a global halt to the development of superintelligent AI! The protests will coincide with the AI Safety Summit in San Francisco. https://lnkd.in/eiqYcumG
-
AI companies are about to make digital brains that are even larger than ours. Let's stop them.
It is now public knowledge that multiple LLMs significantly larger than GPT-4 have been trained, but they have not performed much better. That means the scaling laws have broken down. What does this mean for existential risk?
Leading labs such as OpenAI are no longer betting on ever-larger training runs, but are trying to increase their models' capabilities in other ways. As Ilya Sutskever says: "The 2010s were the age of scaling, now we're back in the age of wonder and discovery once again. Everyone is looking for the next thing. Scaling the right thing matters more now than ever."
Some might say we are back where we started. However, hardware progress has continued. As the graph below shows, compute is rapidly leaving human brains in the dust. It doesn't appear that we have quite figured out the AGI algorithm yet, despite what Sam Altman might say. But more and more startups, then academics, and finally everyone will be in a position to try out their ideas. This is by no means a safer situation than one where only a few leading labs need to be watched.
It is still quite likely that AGI will be invented within a relevant timespan, for example the next five to ten years. Therefore, we need to continue informing the public about its existential risks, and we need to continue proposing helpful regulation to policymakers. Our work is just getting started. https://lnkd.in/gq-8kYrh
-
What is AGI? Who are these people making it? Why are they doing this, even if they say it could lead to human extinction? And what can we do to prevent this? This new project, called The Compendium, was just released by Connor Leahy, and you should read it.
The Compendium
thecompendium.ai
-
Imagine you're in a writing room, tasked with writing a script in which some heroes beat a superintelligent AI. How difficult would it be to make that script realistic? Spoiler: it's impossible. This award-winning 30-minute film demonstrates why we should not allow anyone to build something that can outsmart humanity. Watch the full version here: https://buff.ly/4hCayRO
-
A ban on using unlicensed data in training runs would effectively pause frontier AI development for at least a while. (13,500 signatures and counting) https://buff.ly/3zZCChn
-
PauseAI reposted this
Wrote an in-depth post about why pausing AI might not be such a far-off plan. https://lnkd.in/dPCC2YZe Here are the key takeaways:
1) We might not have enough time to solve alignment. Leaders of labs are talking about timelines of less than five years. AI experts also don't rule out short timelines, and Metaculus is even more pessimistic. The work left to solve alignment could take much longer.
2) The U.S. public supports a pause on AI development. According to several surveys, this seems quite clear.
3) There are promising case studies of public opposition overcoming strong industries. These include GM foods in Europe in the 90s, and Germany opposing nuclear energy after the 2011 Fukushima disaster.
4) There are several proposals for a global pause, and many of them could be more promising than the default "pause everything now" setup. Options include conditional pausing, pausing everywhere except for narrow use by governments, or except by a well-coordinated, secure international facility (a "CERN for AI").
5) Current Chinese discourse on AI safety shows awareness of short-term concerns, and China might even be ahead of the US in regulating AI. Its stance on longer-term or more catastrophic concerns remains less clear.
6) Finally, complete global coordination for pausing may not be immediately necessary.
Overall, I think this is an important problem with lots of room to take in more people to support the cause! 🔥
-
Professor Geoffrey Hinton has just won a Nobel Prize for his work on AI. Hinton believes there is a large chance AI will kill us all - he is "kinda 50/50" on the likelihood of AI taking over. And he's far from alone in his worries. The three most-cited AI researchers (Hinton, Yoshua Bengio, and Ilya Sutskever) are all warning that AI could lead to human extinction. The calls for international regulation could not be louder. Yet it is still perfectly legal to scale up increasingly powerful AI models. Dear politicians: stop this madness. Prevent AI labs from building a digital brain that can outsmart humans.