Creating a superhuman artificial intelligence could lead to two worst-case scenarios, says Charles Jones, a professor of economics at Stanford Graduate School of Business. In the first, the power to kill everyone—in, say, the form of an AI-engineered supervirus—could fall into the wrong hands. In the second, AI could turn out like a superintelligent alien that—perhaps with no malice—wipes out its puny hosts.
AI experts agree that we should start planning how to avert these existential risks before it’s too late. That requires diverting money from the AI race to spend on safety research. But just how much?
Jones has run the numbers, and like a lot of numbers associated with AI, they’re really big.
…
