How should we really be preparing?

For centuries, nations have engaged in “military exercises” to prepare for potential future threats. U.S. forces have simulated scenarios ranging from a Chinese invasion of Taiwan to a full-scale nuclear attack from Russia, aiming to evaluate possible defensive tactics and strategies.

Today, however, we are blindly entering humanity’s largest societal experiment without any significant “governmental AI threat assessments.” I’ve spoken with dozens of top global experts, and none of them knows of any organization currently working to model AI’s societal impact.

AI experts are focused on the most obvious repercussions: fake news, algorithmic bias, job displacement, and the need for universal basic income. Beyond these, however, our experts have few additional answers. Worse still, most don’t seem to have many more questions.

In a future where AI will provide most of the answers, knowing the right questions to ask will be our most valuable asset.

Governments and corporations must urgently create an independent organization, bringing together interdisciplinary experts to develop a comprehensive model of AI’s future societal impact. It frustrates me deeply to see that this is not already happening.

This book seeks to make a difference in a world where authorities appear to be asleep at the wheel.

This attempt to predict future threats could still prove futile, but as the great mathematician and philosopher Henri Poincaré warned: “It is far better to foresee, even without certainty, than not to foresee at all.”