AI researcher Geoffrey Hinton won a Nobel Prize earlier this month for his work pioneering the neural networks that help make AI possible.
He also believes that his creation will hasten the spread of misinformation, eliminate jobs, and perhaps one day decide to annihilate humankind.
So, now he’s madly working on ways to keep us safe?
Not quite. He says that he’s “too old to do technical work” and that he consoles himself with what he calls “the normal excuse”: if he hadn’t done it, someone else would have.
He’s just going to keep reminding us that he regrets his work and that we might be doomed.
I guess there’s some intellectual and moral honesty in his position. Since he didn’t help invent a time machine, he can’t go back and undo his past work, and he never intentionally created a weapon of mass destruction. His mental capacity today at 76 is no match for the brainpower he possessed as a young man.
And he gave up whatever salary he was getting at Google so he could sound the alarm (though he’ll likely make more on the speaking circuit).
History gives us examples of other innovators who were troubled by and/or tried to make amends for the consequences of their inventions.
In 1789, an opponent of capital punishment named Joseph-Ignace Guillotin proposed a swift, efficient bladed machine to behead people, er, along with recommendations for its fair use and protections for its victims’ families. He also hoped that less theatrical executions would draw fewer spectators and reduce public support for the practice.
After 15,000+ people were guillotined during the French Revolution, he spent the remainder of his life speaking on the evils of the death penalty.
In 1867, Alfred Nobel patented a nitroglycerin-based explosive called “Nobel’s Safety Powder” – otherwise known as dynamite – that could make mining safer and more efficient. He also opened 90+ armaments factories while claiming that he hoped that equipping two opposing armies with his highly efficient weapons would make them “recoil with horror and disband their troops.”
He created his Peace Prize in his will almost 30 years later to honor “the most or the best work for fraternity among nations.” While the medal has been awarded most years since, there’ve been no reports of opposing armies disbanding because their guns are too good.
In 1945, Robert Oppenheimer and his Manhattan Project team detonated the first successful nuclear weapon, after which he reportedly quipped, “I guess it worked.” Bombs would be dropped on Hiroshima and Nagasaki about a month later, and Oppenheimer’s mood would shift: he told President Truman, “I feel I have blood on my hands,” and he went on to host or participate in numerous learned conclaves on arms control.
No, I’m not overly bothered that Geoffrey Hinton follows in a long tradition of scientists having late-in-life revelations. What frightens and angers me is that the tradition continues.
How many junior Guillotins blindly believe that they can fix a problem with AI without causing other ones? How many Nobels are turning a deaf ear to the reports of their chatbot creations lying or being used to do harm?
How many Oppenheimers are chasing today’s Little Boy – an artificial general intelligence, or “AGI” – without contemplating the broader implications of their inventions…or planning to take any responsibility for them, whether already known or as yet to be revealed?
You’d think that history would have taught us that scientists need to be more attuned to the implications of their actions. If it had, maybe we’d require STEM students to take courses in morals and personal behavior, or make researchers working on particularly scary stuff submit to periodic therapeutic conversations with psych experts who could help them keep their heads on straight.
Naw, instead we’re getting legislation intended to make sure AI abuses all of us equally, and that otherwise absolves its inventors of any culpability, however onerous the impacts turn out to be.
Oh, and allows inventors like Mr. Hinton to tell us we’re screwed, collect a prize, and go off to make peace with their consciences.
Stay tuned for a new generation of AI researchers to get older and follow in his footsteps.
And prepare to live with the consequences of their actions, however much or little they regret them.
[This essay appeared originally at Spiritual Telegraph]