How alarmed should we really be?
In 2023, experts from OpenAI and Google DeepMind signed a joint statement released to the public. Their entire message was a single sentence:
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
It’s hard to tell whether industry experts are genuinely afraid or exaggerating their fears to convince politicians to create regulatory structures that raise barriers to entry for new AI players and help secure the incumbents’ current lead.
They could also be announcing risks to the public preemptively, so they can later claim they warned us and avoid blame for future AI accidents.
It’s impossible to know how dangerous AGI could really be. But we haven’t seen this many experts worried about a potential global threat since the start of the nuclear arms race. Their concerns may well be unfounded, but they should never be dismissed.