Professor Geoffrey Hinton has just won a Nobel Prize for his work in AI. Hinton believes there is a large chance AI will kill us all; he is "kinda 50/50" on the likelihood of AI taking over. And he is far from alone in his worries. The three most-cited AI researchers (Hinton, Yoshua Bengio, and Ilya Sutskever) are all warning that AI could lead to human extinction. The calls for international regulation could not be louder, yet it is still perfectly legal to scale up increasingly powerful AI models. Dear politicians, stop this madness: prevent AI labs from building a digital brain that can outsmart humans.
PauseAI’s Post
-
Today's NYT reported on a study of changes in the use of adjectives in peer reviews of scientific papers about AI. My takeaway: AI will match human intelligence, not because the machines catch up, but because the humans dumb themselves down. If AI needs to be defeated, we don't need to fight the power of our enemy as much as we have to resist our own sloth. The chart below is from that article, which compiles a series of charts from the original study, which can be found here: https://lnkd.in/gxsrsWjR P.S. I did not take LinkedIn's suggestion of writing my post with AI. And in light of this research, it seems to me that this "service" is a recipe for homogenization and stupidification. (That last word is there so you know this isn't the AI talking...)
-
What if the goal of AI isn’t to keep us glued to screens, but to enhance our lives? Join me for a compelling discussion with Professor Stuart Russell from the University of California, Berkeley, where we tackle this pivotal question. We delve into the ethical dilemmas facing AI researchers and the broader societal implications of their work. Professor Russell challenges the current AI paradigm, advocating for a significant shift toward responsible and beneficial AI development. It’s a conversation that leaves you questioning and reevaluating the true purpose of artificial intelligence. Click the link in the comments to listen in on these viewpoints that are currently reshaping AI’s future. #AIRegulation #AISafety #AIStandard
-
“AI will match human intelligence, not because the machines catch up, but because the humans dumb themselves down.” That sounds distressingly accurate. For hundreds of years, education has largely been a measurement of how much you know. The more you could memorize, the higher your grade, the more intelligent you were assumed to be. What happens when memorization becomes a useless skill because detailed, updated, and cross-referenced knowledge is immediately available to everyone at any time? This is the AI era we are embarking upon. Emphasis must shift from knowledge assimilation to knowledge creation and application. As a father with three children in three different universities, I reluctantly observe that higher education remains based upon the premise that the student who can remember 9 of 10 items on a list is superior to the one who can recall 7; extra marks if you can replicate the professor’s exact opinion on whatever topic was covered. I can only hope that we adapt quickly enough to educate future generations on creativity and novel ways to apply these vast reservoirs of knowledge. Inventions and applications, not encyclopedic recitations. If we measure “intelligence” the way we always have, the machines are already smarter.
-
This report by Nathan Benaich and Air Street Capital delves into the significant aspects of AI, covering research, industry, politics, safety, and ethics. It also offers insightful predictions for the upcoming year. Check out the details here: https://lnkd.in/gJfRE6bR. #AI #GenAI #AIinIndustry
-
An interesting read on two sides of the AI argument. On the one hand, we have people like the British-Canadian computer scientist Geoffrey Hinton, predicting that AI models might “evolve” in dangerous ways and suggesting there’s a 10% chance “these things will wipe out humanity in the next 20 years.” On the other, we have researchers who argue that generative AI models are nothing more than expensive statistical tricks and that existential risks from the technology are “science fiction fantasy.” Risks should of course be taken seriously, but is a fatalistic perspective too far-fetched? https://lnkd.in/dGTFSKaq #AI #GenerativeAI #Disruption
-
In this post, Gwern Branwen, an early observer of LLM scaling, discusses AI progress and influences on the development of AGI, emphasizing the importance of scaling and compute over traditional algorithmic breakthroughs. He reflects on the potential role of human intelligence versus AI and the implications of forthcoming technological advances like weight-loss drugs on human behavior. Branwen also shares insights on his writing process and the broader impacts of AI on creative work. #ai #technology https://lnkd.in/gRBjguHk
Gwern Branwen - How an Anonymous Researcher Predicted AI's Trajectory
dwarkeshpatel.com
-
It was great to have Fabienne Sandkühler, a co-author of the survey "Thousands of AI Authors on the Future of AI," share with the BuzzRobot community the findings of the survey she and her collaborators conducted in October 2023, interviewing almost 3,000 AI researchers. Some key findings: 50% of researchers said that full automation of labor is expected by 2116 (down from 2164 in the 2022 survey, a 48-year difference). 70% of respondents believed AI safety research should be prioritized more than it currently is. The biggest social impact concerns related to AI are deepfakes (spreading misinformation), manipulation of public opinion using AI, and economic inequality. Watch the full presentation with insights from the survey on our YouTube channel. #artificialintelligence #techtalks #machinelearning #aipredictions #deepfakes #singularity #fullautomation https://lnkd.in/gbw69qpR
The predictions of top researchers on future advancements of AI (Artificial Intelligence)
https://www.youtube.com/
-
The article titled "Who's afraid of the big, bad AI computer?" delves into the duality of excitement and apprehension surrounding AI technology in the wake of GenAI's rapid advancements. It explores both the transformative potential of AI and the fears it invokes, highlighting the necessity for responsible innovation amidst rising concerns over its implications. Read more here: [WatersTechnology](https://lnkd.in/ggPDdKYb) #AI #ArtificialIntelligence #TechnologyTrends #Innovation #DataScience #GenAI #FutureTech #ResponsibleTech
-
https://lnkd.in/dGn3dH6R Explore the groundbreaking innovations at Sakana AI Labs and their comprehensive AI system designed to revolutionize scientific research. Dive into the heated debate surrounding AI's ability to understand and innovate like humans, addressing crucial concerns about quality and authenticity. Uncover the potential risks posed by fake research and the pressing need to safeguard scientific integrity in an era of rapid technological advancement. #SakanaAILabs #AIScientificResearch #AIInnovation #ScientificIntegrity #TechRevolution
AI Scientist Writes Papers: Should We Be 2024 #SakanaAILabs #AIScientificResearch #ScientificIntegrity
https://www.youtube.com/
-
"Let's envision a vibrant symphony of knowledge, where engineers and philosophers harmonize, data scientists and anthropologists collaborate, and ethicists and artists create a chorus of wisdom that guides us towards a responsible and equitable AI future." ~Soribel Feliz, current STPF fellow. In the newest Sci on the Fly blog post, Soribel talks about why all the scientists and scholars - "not just the Einsteins and Teslas" - should be brought to the table when addressing the complex questions around artificial intelligence. Read it here: https://bit.ly/49MUloG #scipol #artificialintelligence #ai
Trust the Scientists (All of Them): AI Impacts everyone and everyone should impact AI policy
aaaspolicyfellowships.org