But… Won’t scientists stop AGI when it becomes too dangerous?
Over the decades, the press has documented ample evidence of scientists exaggerating or downplaying findings to suit the interests of those funding their research. The desire for progress, recognition, and compensation will lead engineers to downplay AI’s risks as well.
As long as those working on AI are human, they will share our optimism bias and our reluctance to internalize novel existential threats. Scientists may well be smarter than most of us, but they are not necessarily more conscientious, empathetic, caring, responsible, or courageous.
Having all recently lived through a worldwide pandemic, let’s keep in mind that a Google AI “lab leak” could be far worse than a Wuhan “lab leak.”
A bat having sex with a pangolin will be a much harder sell for authorities trying to deflect blame for an AI “global pandemic.”