Will AGI seek to harm us?

The possibility that an artificial superintelligence will actively choose to destroy humanity strikes me as an extremely unlikely scenario.

A person or entity with a higher level of intelligence will also generally display greater understanding, respect, and empathy.

I trust that, in the long run, AI will prove to be a positive force for humanity.

We should ignore the fear-mongering doomsday peddlers and instead be concerned that our population might not adapt as quickly as AI develops.

We should be concerned about the unintended consequences that arise from failures of imagination and accidents in implementation, and wary of a general lack of legislation, corporate abuse, and the occasional terrorizing schemes of fanatics…

We could spend years fearing AGI, only to discover that the real threat was always other humans wielding the newest technology.