Will Unplugged AI Go To Heaven?
Origin stories aside, maybe the distinctions we make between what’s living and what’s not are more fluid, like the spiritualists have long contended.
The latest news from the White House only goes to show that it doesn’t understand AI risk, or is unwilling to do something meaningful about it.
AIs studying one another is like a giant digital game of telephone in which a statement is modified every time it’s shared.
By making our lives more convenient, the Internet has made it easier for AI to study us.
A recent survey said that we’re slightly pessimistic about AI’s impact on the world even as we imagine it will invent new medical treatments and “economic empowerment” (whatever that means). More than a third of us expect AI to wipe out human civilization.
Experts warn of imminent destruction, yet corporations are rushing to invest in the very thing said to cause it. What are we missing about AI?
We’re being told that we must tolerate the possibility of deadly weapons in order to enjoy power generation.
Turns out that the people in “Responsible AI” aren’t responsible for AI.
Science provides descriptions. It reports the ways things work. It’s repeatable and reliable, and it has given us endless services and conveniences. It is undeniably accurate and real. But it doesn’t explain why things work, let alone what they mean.
Google’s position on AI risk is that its work raises lots of problems for us. It’s like Alfred E. Neuman wrote the policy. “What, me worry? It’s YOUR problem.” Only it’s not.