Soon after scientists began working on the Manhattan Project, physicist Edward Teller had a terrible idea. What if, Teller wondered, setting off a single nuclear weapon caused the nitrogen in the air or the hydrogen in the oceans to fuse, creating a global catastrophe usually described as “setting the atmosphere on fire”?
The idea alarmed several of Teller's associates, who immediately set to work calculating the odds of this threat. In a later interview, Arthur Compton, who was Teller’s boss at the University of Chicago’s Metallurgical Laboratory, gave the result of that calculation as “approximately three in one million,” or 0.0003%.
Reassured that he was only designing a weapon that would destroy a city and not vaporize the planet, Teller carried on.
Since that time, many people have wondered about the ethics of this decision. Yes, the chances may have been remote, but what gave Teller and his pals the right to set off a weapon that carried any chance at all of destroying the world? Even setting aside the horrific nature of what an atomic bomb does when working as designed, this seems like jaw-dropping hubris.
Now, assume the number that came from Compton’s calculations was not 0.0003%, but 20%. Or even 100%. Would anyone be mad enough to keep developing a device when merely operating it would place the world in almost certain jeopardy?
Sure they would. In fact, investors are currently racing to spend trillions on the chance to be first across the line with a shiny doomsday machine. They’re doing it right now.
Elon Musk has a glass-half-full mentality when it comes to AI — and that means there's "only a 20% chance of annihilation," according to the billionaire.
Compared to researchers in the field who set that number much higher, Musk is extremely optimistic. In fact, he’s much more optimistic than past Elon Musk.
“With artificial intelligence we are summoning the demon,” Musk famously stated at MIT’s AeroAstro Centennial Symposium in 2014. “In all those stories where there’s the guy with the pentagram and the holy water, it’s like, yeah he’s sure he can control the demon. Doesn’t work out.”
That’s from before Musk decided that if AI was going to destroy the world, he might as well be the one to pull the trigger. Musk and his xAI are now racing ahead, attempting to catch up to leaders in the field through One Simple Trick: removing safety constraints. That’s turned the Grok chatbot into a series of disasters.
How is Musk’s continued mucking around in Grok’s instructions going today?

Maybe Grok’s fawning admiration is enough to convince Musk that he’ll be just peachy when his baby comes for the rest of humanity, but many of the field’s most prominent researchers are feeling far less enthusiastic about the impending extinction event. And many of them are moving up their countdowns to destruction.
How is the Pentagon responding to this existential threat? Uhhh … They hired MechaHitler.
Elon Musk’s artificial intelligence (AI) company xAI has scored a contract for up to $200 million with the Department of Defense alongside three other major tech firms, the Pentagon announced Monday.
Only a year ago, most researchers were convinced that AI research was hitting a wall. The energy and cost needed to make progress were approaching a limit, not unlike a spaceship trying to reach the speed of light, where every gain demanded another order of magnitude of power. Over the last few months, however, that barrier has failed to appear. Efficiency improvements are turning fewer compute cycles into more progress. Now AIs are increasingly taking over the optimization of their own replacements, and the ability of any human to understand these systems is rapidly evaporating.
Even previously skeptical researchers are now expecting the emergence of artificial general intelligence (AGI) by the end of this decade, by which point humans will have fully lost control over the systems they created.
Most researchers now believe that AGI is around the corner. They also believe that the most likely outcome of that breakthrough is the end of our species.
Instead of one Manhattan Project, we’re now running dozens, out of office parks and data centers, each of which is incentivized to drop safety constraints for a slight advantage in speed. And instead of a 0.0003% chance of a world-ending event from a single use of our new invention, experts believe total destruction is the most likely outcome. The scenario above is just one of many leading to an extinction-level event.
We’re not just courting disaster; we’re begging for it. As a small group of companies funded by the world’s wealthiest men races to destroy us, we’re playing with the toys they toss out as distractions and passing laws to make sure nothing gets in their way. It’s history’s greatest suicide pact.
But hey, maybe the billionaires will survive. As pets, of course. A superintelligence has no need for partners.