How alarmed should we really be?

The risk of AGI can be compared to the threat of an inbound asteroid that must be deflected before it hits Earth. The asteroid might carry within it assets of immeasurable value, riches that could secure humanity’s long-dreamed future among the stars… But first we need to survive its impact.

In 2023, experts from OpenAI and Google DeepMind released a joint statement to the public. Their message was a single sentence:

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

It’s hard to tell whether industry experts are genuinely afraid or are exaggerating their fears to push politicians toward regulatory structures that raise barriers to entry for new AI players and help secure the incumbents’ current lead.

They could also be preemptively announcing risks to the public so that, when a future AI accident occurs, they can claim they warned us and escape the blame.

It’s impossible to know how dangerous AGI could really be. But we haven’t seen this many experts concerned about a potential global threat since the dawn of the nuclear arms race. Their concerns may well prove unfounded, but they should never be dismissed.