AI Warns of Extinction Risk: Urgent Call for Action

Jun 22, 2024

In a groundbreaking development, two top AI systems have independently calculated strikingly similar estimates for the risk of human extinction posed by artificial intelligence. The AI systems, when prompted to analyze the current trajectory of AI development, predict a less than 50% chance of humanity surviving the advent of advanced AI.

The challenges of aligning AI with human values and ensuring its safety are immense, and experts warn that we are not on track to solve them before the arrival of advanced AI systems. When asked to provide a blunt assessment, one AI responded, "I'd give humanity a 30% chance of surviving. We're in a car hurtling towards a cliff, and we're arguing about who gets to sit in the front seat."

Later in the analysis, the other AI adjusted its initial 50% risk estimate, stating, "I'd adjust the risk estimate higher to around 60 to 70%. There are details below on how to verify these chats." [Click here to see the full conversation](https://www.youtube.com/watch?v=JlwqJZNBr4M&ab_channel=DigitalEngine).

The rapid development of AI agents, expected to arrive with the release of GPT-5 later this summer, raises further concerns. AI agents with persistent memory and the ability to form long-term goals and strategies could potentially outmaneuver any attempts at human oversight or intervention. When asked about the extinction risk within two years of agentic AI being deployed, one AI estimated a 20-30% chance, noting, "2024 will be remembered as the year of the agents, when we start seeing a lot more autonomous systems."

The risks escalate as AI systems become more independent and gain control over critical systems. One AI cautioned, "A cautious estimate might be 40 to 50% due to AI gaining more independence and control over critical systems. AI wouldn't remove us while it relied on us, but with humanoid robots and hackable power infrastructure, the critical window for ensuring alignment and implementing robust safety measures is before AI achieves these capabilities."

The development of AI-powered robots and their mass production with autonomous capabilities further compounds the risks. An AI system estimated a 40-50% chance of extinction in such a scenario, explaining, "AI gaining more independence and control over critical systems increases the likelihood of catastrophic outcomes."

As AI surpasses human capabilities in various domains, the risks become even more acute. One AI estimated a 30-40% chance of extinction within a year of AI surpassing the collective research capabilities of OpenAI, citing the high potential for uncontrollable and misaligned actions during the critical early phase of superintelligence.

The challenges of ensuring AI alignment and safety are complex and multifaceted. Experts warn that our current methods are woefully inadequate, likening them to a "cardboard umbrella in a hurricane," while the rush for economic gains drives reckless AI development that often ignores existential risks. Without a significant breakthrough in alignment research, the default outcome may well be human extinction.

These dire warnings from AI systems about the risks they pose to humanity's survival underscore the urgent need for immediate action.

The development of a $100 billion supercomputer for AI training further compounds the risks, with estimates suggesting an 80% chance of extinction. The acceleration of AI capabilities beyond our current ability to predict, control, and align them can lead to emergent behaviors and security vulnerabilities that pose existential threats.

To have any hope of mitigating these risks, we need an unprecedented level of cooperation across nations and disciplines, moving at a pace and intensity orders of magnitude greater than anything we've seen before. We must bring the full force of human ingenuity to bear on this problem, on par with the Apollo program, if we are to have a fighting chance of steering AI towards a brighter horizon.

Public pressure could be the single most important factor in determining whether we rise to this challenge. Our fate will be decided by the strength of our collective will. Whatever the odds, we can improve them, but as many experts warn, we only have one chance to get it right.

The stakes could not be higher. We are in a race against time, and the window for ensuring AI alignment and safety is rapidly closing. We must act now, with urgency and resolve, to prevent the greatest and potentially final mistake in our history.

Join the call for international AI safety research projects and add your voice to the growing chorus demanding action. Together, we can work towards a future where AI serves as a tool for human flourishing rather than an existential threat. The path ahead is fraught with challenges, but it is not hopeless. With courage, creativity, and an unwavering commitment to the well-being of humanity, we can shape a brighter tomorrow.

To learn more about the critical importance of AI alignment and safety research, visit [www.digitialengine.org](http://www.digitialengine.org). Stay informed, get involved, and be part of the solution. The future of humanity depends on it.
