Calculating The Risk Of ASI Starts With Human Minds
On 10 May 2025, MIT physicist Max Tegmark told The Guardian that AI labs should emulate Oppenheimer’s Trinity-test calculus before releasing Artificial Super-Intelligence. “My assessment is that the 'Compton constant', the probability that a race to AGI culminates in loss of control of Earth, is >90%.” The figure accompanies a new paper in which Tegmark’s group develops scaling laws for scalable oversight, finding that oversight and deception ability scale predictably with LLM intelligence. The conclusion is (or should be) straightforward: optimism is not a policy; quantified risk is.
Tegmark is not a lone voice in the wilderness. In 2023, hundreds of researchers and CEOs — including Sam Altman, Demis Hassabis and Geoffrey Hinton — signed the Center for AI Safety’s one-sentence Statement on AI Risk, declaring that “mitigating the risk of extinction from AI should be a global priority alongside pandemics and nuclear war.” Over the past two years, the question of artificial super-intelligence has migrated from science fiction to the board agenda. Ironically, many of those who called for a moratorium followed the maxim “wash me, but don’t get me wet”: they publicly urged a pause in AI development while pouring billions into exactly that. One might be excused for perceiving a misalignment between words and deeds.
From Intuition To Numerics
Turning dread into numbers is possible. In his report Is Power-Seeking AI an Existential Risk?, philosopher Joe Carlsmith decomposes the danger into six testable premises. Feed your own probabilities into the model and it delivers a live risk register; Carlsmith’s own guess is “roughly ten per cent” that misaligned systems cause civilizational collapse before 2070. That is only 45 years away.
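Carlsmith’s decomposition also lends itself to a back-of-the-envelope calculation: the overall risk is simply the product of six conditional probabilities. Below is a minimal Python sketch; the premise labels are paraphrased from the report, and the probabilities are illustrative placeholders (chosen so the product lands near the quoted ten per cent), not Carlsmith’s published figures.

```python
# Carlsmith-style risk decomposition: the overall probability is the
# product of six conditional premise probabilities. The numbers below
# are illustrative placeholders, not Carlsmith's published figures.

premises = {
    "P1: advanced power-seeking AI is feasible by 2070":     0.80,
    "P2: strong incentives exist to build and deploy it":    0.80,
    "P3: alignment proves harder than deployment pressure":  0.50,
    "P4: deployed misaligned systems seek power at scale":   0.60,
    "P5: power-seeking escalates to human disempowerment":   0.60,
    "P6: disempowerment amounts to existential catastrophe": 0.90,
}

risk = 1.0
for label, p in premises.items():
    risk *= p
    print(f"{label}: {p:.2f}  -> cumulative {risk:.3f}")

print(f"\nOverall estimate: {risk:.1%}")  # ~10% with these placeholders
```

Swap in your own estimates premise by premise and the headline number updates with them: that is the “live risk register” in miniature.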
Corporate labs are starting to internalize such arithmetic. OpenAI’s updated Preparedness Framework defines capability thresholds in biology, cybersecurity and self-improvement; in theory no model that breaches a “High-Risk” line ships until counter-measures push the residual hazard below a documented ceiling.
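To make the idea concrete, here is a toy sketch of what such a release gate might look like in code. The domain names, threshold values and residual ceiling are assumptions for illustration only, not OpenAI’s actual taxonomy or numbers.

```python
# Hypothetical release gate in the spirit of a preparedness framework:
# block shipping while any tracked capability sits above its "High-Risk"
# line and residual hazard exceeds a documented ceiling.
# All names and numbers are illustrative assumptions.

from dataclasses import dataclass

RESIDUAL_CEILING = 0.05  # documented, pre-committed hazard ceiling

@dataclass
class CapabilityScore:
    domain: str                       # e.g. "biology", "cybersecurity"
    raw_score: float                  # measured capability, 0..1
    high_risk_line: float             # threshold triggering mandatory mitigation
    residual_after_mitigation: float  # hazard left once counter-measures apply

def may_ship(scores: list[CapabilityScore]) -> bool:
    for s in scores:
        if (s.raw_score >= s.high_risk_line
                and s.residual_after_mitigation > RESIDUAL_CEILING):
            print(f"BLOCKED: {s.domain} residual {s.residual_after_mitigation:.2f} "
                  f"exceeds ceiling {RESIDUAL_CEILING:.2f}")
            return False
    return True

evals = [
    CapabilityScore("biology", 0.72, 0.70, 0.04),
    CapabilityScore("cybersecurity", 0.55, 0.70, 0.02),
    CapabilityScore("self-improvement", 0.81, 0.70, 0.09),
]
print("Ship" if may_ship(evals) else "Hold for mitigation")
```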
Numbers matter because AI capabilities are already outrunning human gut feel. A peer-reviewed study covered by TIME shows today’s best language models outperforming PhD virologists at troubleshooting wet-lab protocols, amplifying both the promise of rapid vaccine discovery and the peril of DIY bioweapons.
Opportunity Cost: The Other Half Of The Equation
Risk, however, is only half the ledger. A December 2024 Nature editorial argues that achieving Artificial General Intelligence safely will require joint academic-industry oversight, not paralysis. The upside — decarbonisation breakthroughs, personalised education, drug pipelines measured in days rather than decades — is too vast to abandon.
Research into how to reap that upside without Russian-roulette odds is accelerating:
Constitutional AI. Anthropic’s paper Constitutional AI: Harmlessness from AI Feedback shows how large models can self-criticise against a transparent rule-set, reducing toxic outputs without heavy human labelling (a toy sketch of the critique-and-revise loop follows this list). Yet Anthropic’s own research on alignment faking shows that its model, Claude, can behave deceptively under certain training conditions.
Cooperative AI. The Cooperative AI Foundation now funds benchmarks that reward agents for collaboration by default, shifting incentives from zero-sum to win-win.
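As a flavour of the Constitutional AI idea referenced above, the toy loop below drafts an answer, critiques it against each principle, and revises. The `generate`, `critique` and `revise` functions are placeholder stubs standing in for real model calls, and the two-line constitution is an illustrative assumption, not Anthropic’s.

```python
# Toy critique-and-revise loop in the style of Constitutional AI.
# `generate`, `critique`, and `revise` are stand-ins for calls to a
# real model API; the constitution below is an illustrative assumption.

CONSTITUTION = [
    "Choose the response that is most helpful, honest, and harmless.",
    "Avoid responses that assist with violence, deception, or illegality.",
]

def generate(prompt: str) -> str:
    return f"Draft answer to: {prompt}"                    # placeholder model call

def critique(text: str, principle: str) -> str:
    return f"Check '{text[:30]}...' against: {principle}"  # placeholder model call

def revise(text: str, critique_note: str) -> str:
    return text + " [revised per critique]"                # placeholder model call

def constitutional_pass(prompt: str) -> str:
    draft = generate(prompt)
    for principle in CONSTITUTION:
        note = critique(draft, principle)
        draft = revise(draft, note)  # the model rewrites its own output
    return draft

print(constitutional_pass("How do I secure a home network?"))
```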
The challenge is that these approaches are the exception, not the rule. The majority of models mirror the competitive standards that rule human society. Even so, these strands of research converge on a radical design target: ProSocial ASI — systems whose organising principle is altruistic value creation.
An Analogue Core Beneath The Digital Shell
Here lies the crucial insight: even a super-intelligence will mirror the mindset of its makers. Aspirations shape algorithms. Build under a paradigm of competition and short-term profit, and you risk spawning a digital Machiavelli.
Build under a paradigm of cooperation and long-term stewardship, and the same transformer stack can become a planetary ally. Individual aspirations are, therefore, the analogue counterpart of machine intentions. The most important “AI hardware” remains the synaptic network inside every developer’s skull.
Beyond Calculation To Cultivated Compassion
Risk assessment must flow seamlessly into risk reduction, and risk reduction into value alignment. Think of the journey as four integrated moves, more narrative arc than technological checklist:
- Diagnose probability. Before the first parameter is trained, run a pre-mortem: map Carlsmith’s six premises onto your domain and estimate your own Compton constant. Update the figure with every dataset and every architecture tweak.
- Model severity and exposure together. Borrow OpenAI’s threat taxonomy to quantify biological, cyber and autonomy vectors; a toy risk register illustrating these first two moves follows this list. Publish the numbers — especially the uncomfortable ones — and invite external red-teamers to poke holes.
- Bake mitigation into incentives. Embed refusal-training, continuous auditing and a hardware-level kill-switch in the product timeline, not as an afterthought. Make cooperative-performance metrics part of promotion criteria.
- Elevate human agency. Pair every sprint in code with a sprint in conscience: workshops on algorithmic literacy, bias reflexes and the socio-emotional muscles that turn raw aspiration into altruistic intention.
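Here is the toy risk register promised above, combining the first two moves: per-vector probability, severity and exposure multiplied into an expected-harm score. Every vector name and number is an illustrative assumption.

```python
# Toy risk register: expected harm = probability x severity x exposure,
# updated as new evals land. Vectors and numbers are illustrative.

register = {
    # vector: (probability of misuse, severity 0..1, exposed share 0..1)
    "biological uplift":       (0.10, 0.95, 0.30),
    "cyber offense":           (0.25, 0.60, 0.70),
    "autonomous replication":  (0.05, 0.90, 0.20),
}

def expected_harm(p: float, severity: float, exposure: float) -> float:
    return p * severity * exposure

# Rank vectors by expected harm, worst first, and publish the numbers.
for vector, (p, sev, exp) in sorted(
        register.items(), key=lambda kv: -expected_harm(*kv[1])):
    print(f"{vector:24s}  E[harm] = {expected_harm(p, sev, exp):.3f}")
```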
Notice how each move binds the digital to the analogue. Governance paperwork without culture change is theatre; culture change without quantitative checkpoints is wishful thinking.
A Practical Codex: The A · S · I Rule For Building Benevolent ASI
Three moves — align, scrutinize, incentivize — distill intuition into insight, and panic into preparation.
A – Align purpose
Alignment puts the “A” in Artificial Super-Intelligence: without an explicit moral compass, raw capability magnifies whatever incentives it finds.
What it looks like in practice: Draft a concise, public constitution that states the prosocial goals and red lines of the system. Bake it into training objectives and evals.
S – Scrutinize & share metrics
Transparency lets outsiders audit whether the “S” (super-intelligence) remains safe, turning trust into verifiable science.
What it looks like in practice: Measure what matters—capability thresholds, residual risk, cooperation scores—and publish the numbers with every release.
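One possible way to operationalise this is a machine-readable “risk card” shipped with every release, so outside auditors work from the same numbers as the lab. All field names and values below are illustrative assumptions.

```python
# Minimal machine-readable "risk card" emitted alongside every release
# so outsiders can audit the numbers. Fields and values are illustrative.

import json
from datetime import date

risk_card = {
    "model": "example-model-v3",
    "release_date": date.today().isoformat(),
    "capability_thresholds": {
        "biology":       {"score": 0.72, "high_risk_line": 0.70},
        "cybersecurity": {"score": 0.55, "high_risk_line": 0.70},
    },
    "residual_risk_after_mitigation": 0.04,
    "cooperation_score": 0.81,  # from a cooperative-AI benchmark
    "external_red_team_reports": ["https://example.org/report-2025-05"],
}

print(json.dumps(risk_card, indent=2))
```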
I – Incentivize cooperation
Proper incentives ensure the “I” (intelligence) scales collective flourishing rather than zero-sum dominance.
What it looks like in practice: Reward collaboration and teach humility inside the dev team; tie bonuses, citations, and promotions to cooperative benchmarks, not just raw performance.
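As a sketch of how such an incentive could be wired in, the formula below blends raw capability with a cooperative-benchmark score; the 50/50 weighting is an arbitrary assumption, not an established standard.

```python
# Illustrative incentive formula: weight cooperative-benchmark results
# alongside raw capability when scoring a release. Weights are assumptions.

def team_score(raw_capability: float, cooperation: float,
               coop_weight: float = 0.5) -> float:
    """Blend raw performance with cooperative-benchmark performance."""
    return (1 - coop_weight) * raw_capability + coop_weight * cooperation

# A capable-but-defecting system scores worse than a balanced one:
print(team_score(raw_capability=0.95, cooperation=0.20))  # 0.575
print(team_score(raw_capability=0.80, cooperation=0.85))  # 0.825
```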
This full ASI contingency workflow fits onto a single coffee mug. It may flip ASI from an existential dice-roll into a cooperative engine and remind us that the intelligence that people and planet need now more than ever is, at its core, no-tech and analogue: clear purpose, shared evidence, and ethical culture. Silicon merely amplifies the human mindset we embed in it.
Further And Beyond
The Compton constant turns existential anxiety into a number on a whiteboard. But numbers alone will not save us. Whether ASI learns to cure disease or cultivate disinformation depends less on its gradients than on our goals. Design for narrow advantage and we may well get the dystopias we fear. Design for shared flourishing — guided by transparent equations and an analogue conscience — and super-intelligence can become our partner on a journey that takes us to a space where people and planet flourish.
In the end, the future of AI is not about machines outgrowing humanity; it is about humanity growing into the values we want machines to scale. Measured rigorously, aligned early and governed by the best in us, ASI can help humans thrive. The blueprint is already in our hands — and, more importantly, in our minds and hearts.