AI is everywhere, popping up in headlines and conversations like never before. But some experts are sounding alarms, arguing that we need safety calculations for Artificial Superintelligence (ASI) as rigorous as those performed before the first nuclear bomb was detonated. That might sound wild, but the comparison is catching attention for a reason.
Max Tegmark, a physics professor and AI researcher at MIT, and his students have published a paper proposing that companies developing advanced AI calculate the probability that their creations could slip out of human control. They want a calculation similar to the one Arthur Compton performed before the Trinity nuclear test, which put the odds of the bomb igniting the atmosphere in a runaway reaction at slightly less than one in three million.
Tegmark's own calculation puts the probability that a highly advanced AI could threaten humanity at 90%, an enormous figure next to the nuclear test's vanishingly small risk. This theoretical AI, known as Artificial Superintelligence, would be far more capable than anything we see today in tools like ChatGPT.
He argues that companies can't simply say they "feel good" about their AI's safety. They need to put a number on the risk, a so-called "Compton constant," expressing how likely it is that control would be lost. That would pressure firms to take safety seriously and to work together on shared standards.
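To make the contrast concrete, here is a minimal, purely illustrative Python sketch. Nothing about it comes from Tegmark's paper: the function name, the choice of Compton's Trinity figure as a threshold, and the pass/fail framing are all assumptions for illustration. It simply compares an estimated probability of losing control against a chosen acceptable-risk threshold and shows how far apart the two figures quoted above are.

```python
# Illustrative sketch only: the paper does not prescribe this code or these
# names. The numbers are the two figures quoted in this article.

COMPTON_TRINITY_RISK = 1 / 3_000_000   # Compton's pre-Trinity estimate
TEGMARK_ASI_RISK = 0.90                # Tegmark's estimate for a rogue ASI


def compton_constant_check(p_loss_of_control: float,
                           acceptable_risk: float = COMPTON_TRINITY_RISK) -> bool:
    """Return True if the estimated probability of losing control
    stays under the chosen acceptable-risk threshold."""
    return p_loss_of_control <= acceptable_risk


if __name__ == "__main__":
    ratio = TEGMARK_ASI_RISK / COMPTON_TRINITY_RISK
    print(f"Passes Compton-style check: {compton_constant_check(TEGMARK_ASI_RISK)}")
    print(f"Tegmark's figure is ~{ratio:,.0f} times Compton's threshold")
```

Run as written, the check fails and the ratio comes out to roughly 2,700,000, which is the whole point of the comparison: the gap between the two estimates is millions-fold, not marginal.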
Tegmark is no newcomer to pushing for AI safety. He co-founded the Future of Life Institute, which in 2023 called for a pause on the development of powerful AI systems, gathering signatures from big names like Elon Musk and Steve Wozniak. He has also worked with leading researchers from OpenAI, Google, and DeepMind on global AI safety priorities.
While treating an AI release like a nuclear bomb test might sound extreme, it shows how seriously some experts take the potential dangers of superintelligent AI. If an ASI is ever released, we might at least know the estimated odds of it going rogue.
What do you think about this approach? Should AI companies be forced to run these safety calculations before releasing new tech? Drop your thoughts in the comments below!