Google’s former CEO Eric Schmidt has joined Sam Altman, Elon Musk, Geoff Hinton, and a host of lesser-known experts in sounding the alarm on ‘existential risk’ from AI.
They follow in the footsteps of Victor Frankenstein, the artificial life pioneer who was shocked when he saw the ugliness of his creation over 200 years ago:
“Now that I had finished, the beauty of the dream vanished, and breathless horror and disgust filled my heart.”
Like his counterparts today, Frankenstein had set out to do something cool, something intellectually challenging. He was in love with the potential for discovery and his own capacity for doing it:
“None but those who have experienced them can conceive of the enticements of science…but in a scientific pursuit there is continual food for discovery and wonder.”
Once he realized that he could create AI, he had a very brief moment of doubt:
“I doubted at first whether I should attempt the creation of a being like myself, or one of simpler organization; but my imagination was too much exalted by my first success to permit me to doubt of my ability to give life to an animal as complex and wonderful as man…I doubted not that I should ultimately succeed.”
Convinced he was a genius, he decided to unleash his creation on the world in a Regency-era open source experiment:
“I prepared myself for a multitude of reverses; my operations might be incessantly baffled, and at last my work be imperfect, yet when I considered the improvement which every day takes place in science and mechanics…”
The parallels between Mary Shelley’s fictional AI inventor and the real ones today are striking and instructive.
All of them suffer from hubris, and each believes they are somehow smarter or luckier than the others. Or maybe just special, generally speaking.
They try, and fail, to fix the problems they create by adding more tech. Frankenstein tries to mollify his creation by building it a second creature as a bride, but then backs off because he doesn’t want to create a race of super AI. Relying on today’s AI to somehow police itself is equally doomed.
As that approach fails, all of them default to regulation, whether via angry villagers armed with torches or Congressional hearings.
And, throughout it all, they somehow believe that they’re blameless, and that any negative or catastrophic effects of their creations are not their responsibility.
This is because they mistakenly believe that intentions can be ethical even if the outcomes aren’t. While there’s serious philosophical debate over this question, most AI innovators don’t engage with it; they simply equate ignorance with innocence.
“Not caring” or “not understanding” is not the same as being “not responsible.”
As I’ve said before, if they’d created COVID and unleashed it on the world, they’d all be in jail.
But since AI can be used for entertainment, and since companies can fire human workers and use it to answer customer queries, among other profit-making endeavors, any bad outcomes are bugs, not features.
At the end of Shelley’s novel, it’s Frankenstein’s creation that’s overcome with sorrow, not its creator. The human being is rescued from the ice (though he doesn’t live long afterward), and his AI banishes itself from human society, never to be seen or heard from again.
I don’t think today’s AI will be so magnanimous.
[Originally published at Spiritual Telegraph]