{domain:"www.qualitydigest.com",server:"169.47.211.87"} Skip to main content

        
User account menu
Main navigation
  • Topics
    • Customer Care
    • Regulated Industries
    • Research & Tech
    • Quality Improvement Tools
    • People Management
    • Metrology
    • Manufacturing
    • Roadshow
    • QMS & Standards
    • Statistical Methods
    • Resource Management
  • Videos/Webinars
    • All videos
    • Product Demos
    • Webinars
  • Advertise
    • Advertise
    • Submit B2B Press Release
    • Write for us
  • Metrology Hub
  • Training
  • Subscribe
  • Log in
Mobile Menu
  • Home
  • Topics
    • Customer Care
    • Regulated Industries
    • Research & Tech
    • Quality Improvement Tools
    • People Management
    • Metrology
    • Manufacturing
    • Roadshow
    • QMS & Standards
    • Statistical Methods
    • Supply Chain
    • Resource Management
  • Login / Subscribe
  • More...
    • All Features
    • All News
    • All Videos
    • Training

What’s the Price Tag for Preventing an AI Apocalypse?

Like all numbers associated with AI, it’s really, really big

Image credit: imagedepotpro / iStock

Dave Gilson

Stanford Graduate School of Business

February 18, 2026

Creating a superhuman artificial intelligence could lead to two worst-case scenarios, says Charles Jones, a professor of economics at Stanford Graduate School of Business. In the first, the power to kill everyone—in, say, the form of an AI-engineered supervirus—could fall into the wrong hands. In the second, AI could turn out like a superintelligent alien that—perhaps with no malice—wipes out its puny hosts.


AI experts agree that we should start planning how to avert these existential risks before it’s too late. That requires diverting money from the AI race to spend on safety research. But just how much?

Jones has run the numbers, and like a lot of numbers associated with AI, they’re really big.

“I can’t tell you exactly what the right number is, but the right number is way bigger than anything we’re spending now,” he says. His modeling suggests that optimal spending on AI risk mitigation should be at least 1% of the United States’ current GDP—more than $310 billion a year. And it could be much higher—more than 20% of GDP in some of the scenarios he explores.
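As a quick back-of-the-envelope check, the sketch below converts those GDP shares into dollar terms. The roughly $31 trillion figure for current U.S. GDP is an assumption chosen to be consistent with the "$310 billion a year" quoted above, not a number stated in the article.

```python
# Back-of-the-envelope conversion of the GDP shares above into dollar terms.
# The ~$31 trillion figure for current U.S. GDP is an assumption consistent
# with the "$310 billion a year" quoted in the article, not a stated figure.
US_GDP = 31e12  # assumed current U.S. GDP, in dollars

low_estimate = 0.01 * US_GDP   # at least 1% of GDP
high_estimate = 0.20 * US_GDP  # more than 20% of GDP in some scenarios

print(f"1% of GDP:  ${low_estimate / 1e9:,.0f} billion per year")     # ~$310 billion
print(f"20% of GDP: ${high_estimate / 1e12:,.1f} trillion per year")  # ~$6.2 trillion
```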

“Those numbers are really shockingly large,” says Jones. Yet the logic behind them is straightforward: Human life is valuable, so protecting it from catastrophic risk can justify large expenditures. “We don’t quite know what the risk is,” Jones says, “but the fact that it’s there and not zero, and because life is so valuable, we really do want to take action.”

Jones isn’t an AI “doomer” who thinks existential AI risk is all but inevitable. While he finds some of the arguments put forth by pessimists such as the authors of If Anyone Builds It, Everyone Dies (Little, Brown, and Co., 2025) interesting, he says that it’s not necessary to think we’re prompting the apocalypse to take AI risk seriously. “You don’t have to believe the probability is 90% before you want to do something,” Jones says. “Even if the probability is 1%, we’re willing to take actions that are economically large and meaningful.”

The cost of living

Calculating the cost of containing AI starts with putting a price tag on a human life. While that may seem morbid, it’s a routine part of cost-benefit analysis. The federal government generally uses $10 million as the value of a statistical life when setting health, safety, and environmental regulations.

“How much would we as a society pay to avoid a 1% chance of one person dying?” Jones asks. “Well, if you value that life at $10 million, you’d pay 1% of $10 million, which is $100,000.” By comparison, current per capita GDP in the U.S. is around $86,000.
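Writing out that expected-value arithmetic explicitly, using only the figures quoted above:

```python
# Expected-value arithmetic behind the statistical-life example above.
VSL = 10_000_000          # value of a statistical life used in federal cost-benefit analysis, $
p_death = 0.01            # a 1% chance of one person dying
per_capita_gdp = 86_000   # approximate current U.S. GDP per capita, $

willingness_to_pay = p_death * VSL  # expected loss avoided
print(f"Willingness to pay to avoid the risk: ${willingness_to_pay:,.0f}")         # $100,000
print(f"As a share of per capita GDP: {willingness_to_pay / per_capita_gdp:.0%}")  # ~116%
```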


Jones cites Covid as an example of large-scale risk mitigation in action. He estimates that at the height of the pandemic, the United States lost around 4% of total GDP to lockdowns and other public health measures meant to slow the spread of the virus. Whether or not we spent the right amount, Jones says the takeaway is that “it’s very easy to justify spending fairly large amounts to avoid outcomes that kill people.” We sacrificed a significant chunk of our economy to stop a disease with a mortality rate of about 0.3%, which suggests we’d be willing to spend a lot more to prevent a disaster that could kill everyone.

Pinning down an exact figure isn’t possible, given the many unknowns. Just how much existential risk do we face from AI? How soon could this risk be realized? How effective would our mitigation efforts be? When Jones ran 10 million simulations with different values for these and other parameters, optimal spending on AI risk mitigation was slightly more than 8% of GDP. In about one-third of scenarios, however, the optimal spending was zero, meaning either that the risk was negligible or that mitigation efforts would have been ineffective.
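Jones's model isn't reproduced in the article, but the general approach (drawing plausible values for the unknown parameters, then solving for the spending share that maximizes a simple welfare criterion in each draw) can be sketched in a few lines. Everything below, from the parameter distributions to the toy welfare function, is an illustrative assumption rather than Jones's specification, so the output numbers will not match his.

```python
import numpy as np

# Illustrative Monte Carlo in the spirit described above -- NOT Jones's model.
# Parameter distributions and the toy welfare function are assumptions for illustration.
rng = np.random.default_rng(0)
n_sims = 20_000                          # the article mentions 10 million draws; fewer here for speed
spend_grid = np.linspace(0.0, 0.5, 501)  # candidate mitigation spending, as a share of GDP

p_risk = rng.uniform(0.0, 0.05, n_sims)          # assumed existential-risk probability
effectiveness = rng.uniform(0.0, 1.0, n_sims)    # assumed effectiveness of mitigation spending
survival_value = rng.uniform(1.0, 20.0, n_sims)  # assumed value of avoiding catastrophe (log-consumption units)

optimal_share = np.empty(n_sims)
for i in range(n_sims):
    # Spending reduces residual risk with diminishing returns; forgone consumption is the cost.
    residual_risk = p_risk[i] * np.exp(-10.0 * effectiveness[i] * spend_grid)
    welfare = (1.0 - residual_risk) * survival_value[i] + np.log(1.0 - spend_grid)
    optimal_share[i] = spend_grid[np.argmax(welfare)]

print(f"Mean optimal spending share of GDP: {optimal_share.mean():.1%}")
print(f"Scenarios where optimal spending is zero: {(optimal_share == 0).mean():.1%}")
```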

Whatever the numbers, the next big question is how to make robust AI risk mitigation a reality. Although AI executives and researchers have expressed concern that they’re moving too fast, they have powerful incentives not to pump the brakes. “You can understand this as a classic prisoner’s dilemma,” Jones says, referring to the game theory scenario in which acting in self-interest is the rational choice yet leads to a collectively worse outcome. “Each AI lab says, ‘Look, I could race or I could slow down. Even if I slow down, whatever happens is going to happen. But if I continue to race, well, maybe I’m safer than the other people. So I should be part of the race.’”
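That race dynamic can be laid out as a stylized two-player payoff matrix; the payoff numbers below are invented for illustration and are not from the article.

```python
# Stylized two-lab "race vs. slow down" payoff matrix -- payoffs invented for illustration.
# Higher numbers are better. Racing dominates for each lab individually,
# but (race, race) is worse for both than (slow, slow): a prisoner's dilemma.
payoffs = {
    # (lab A choice, lab B choice): (payoff to A, payoff to B)
    ("slow", "slow"): (3, 3),   # both invest in safety, both do reasonably well
    ("slow", "race"): (0, 4),   # A falls behind while B captures the market
    ("race", "slow"): (4, 0),
    ("race", "race"): (1, 1),   # everyone races, collective risk rises
}

def best_response(options, other_choice, player):
    """Return the option that maximizes this player's payoff, holding the other lab fixed."""
    def payoff(my_choice):
        key = (my_choice, other_choice) if player == 0 else (other_choice, my_choice)
        return payoffs[key][player]
    return max(options, key=payoff)

options = ["slow", "race"]
for other in options:
    print(f"If the other lab plays '{other}', the best response is '{best_response(options, other, 0)}'")
```

In this toy matrix, racing is each lab's best response no matter what the other does, yet mutual racing leaves both worse off than mutual restraint, which is exactly the structure Jones invokes.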

Though Jones doesn’t fully describe the policies that could cool the AI arms race, he has some ideas. A tax on GPUs could fund safety research. Drawing on another familiar risk scenario, he sketches a picture in which advanced AI is controlled like nuclear weapons, kept in check by international agreements and institutions. During the Cold War, Jones says, “we managed not to push the red button.”

Yet he acknowledges that containing AI might prove even more daunting than preventing nuclear war. “If eight billion people had access to the red button, can you ensure nobody pushes the red button?”

Published Jan. 23, 2026, by Stanford Graduate School of Business.
