Will AGI understand its answers?
But what does it truly mean for us humans to understand something? Do we really understand everything we believe we do? Don't we often answer questions much as an AI does, predicting the most likely answer from associations in our own 'past training data'?
Do we understand "E = mc²," or do we simply memorize it? How many of us truly grasp the majority of what we claim to know?
How many of us move through life thinking from first principles? How many of us actively question the assumptions we think we 'know' and then go on to create new knowledge and solutions from scratch?
In the end, AI might help us realize how little we actually understand, leaving us a little wiser—just as Socrates would have seen us.