Existential AI Risk Is Right Now
Existential AI risk isn’t something that’s far off in the future or limited to some evil robot destroying mankind, contrary to what many of the leading lights in AI innovation have told us.
We need a better understanding of the context of AI and digital transformation. News Corp has raised its hand and asked us to witness its success in thrilling its investors. It could be an opportunity to judge whether that’s good news for the rest of us.
I think we have unfairly low expectations for the moral capacity of AI engineers.
How will you feel when an AI can convince everyone you know that it’s you?
Much of the talk about AI relies on euphemisms that substitute pleasant or unthreatening definitions for more truthful and scary qualities.
Origin stories aside, maybe the distinctions we make between what’s living and what’s not are more fluid, like the spiritualists have long contended.
The latest news from the White House only goes to show that it doesn’t understand AI risk, or is unwilling to do something meaningful about it.
AIs studying one another is like a giant digital game of telephone, in which a statement is modified every time it’s passed along.
By making our lives more convenient, the Internet has made it easier for AI to study us.
A recent survey said that we’re slightly pessimistic about AI’s impact on the world even as we imagine it will invent new medical treatments and “economic empowerment” (whatever that means). More than a third of us expect AI to wipe out human civilization.