Leading companies in AI met at the White House last week and promised not to destroy the world as they took it over. It was a voluntary commitment. No strings attached.
What’s the point of the deal? This news story expressed the contradiction behind the pledge:
“The latest commitments are part of an effort by President Biden to ensure AI is developed with appropriate safeguards, while not hindering innovation.”
Of course, safeguards hinder, by definition. And to ensure something means to make it certain, if not guarantee it outright.
The White House is using an old approach for a new problem.
The Motion Picture Association (MPA) was founded early in the life of the film industry to preempt federal regulation. When local governments started censoring movies, the industry association created guidelines and a ratings code to head off such interference. It worked, for the most part, freeing moviemakers to meter out doses of saucy stuff while preserving their free rein over other aspects of content, such as violence.
The issue then was lewd content. Today’s issues are false content, along with biased, discriminatory decisions and an overall loss of privacy.
Signatories of the White House deal promise to develop a way for consumers to identify AI-generated content (often described as a “watermark”), which I presume will give us an a priori heads-up on whether or not the stuff is true. This will be in contrast to all of the human-generated lies that get thrown at us daily without any such warning.
If the AI industry has its way — its innovation isn’t hindered — all content will have some element of AI incorporated into it, so the watermark will be ubiquitous. So much for consumer protection.
As for the other potential bad consequences of letting AI run our lives, well, the deal says that the AI makers will “prioritize” and “guide” pathways to addressing them. This includes “developing” ways to “help” mitigate societal challenges like disease and climate change.
In other words, AI will support life, liberty, and the pursuit of happiness.
The blather echoes the nonsense AI makers have been promoting at the Vatican, agreeing to uphold moral principles as their products enable potentially immoral outcomes.
Of course, none of these highfalutin statements come anywhere close to acknowledging or addressing the real issues we’re facing with AI, like wholesale job destruction and empowering governments and institutions with actual super-human capacities for surveillance and manipulation.
They make the problem of deepfakes seem almost silly by comparison.
The statements also aren’t likely to forestall all future government regulation, if only because the EU is likely to pass rules that will force global compliance. Those rules won’t be enough, but they’ll be far more than anything we can expect from a US administration that puts innovation ahead of safety and our quality of life.
The latest news from the White House only goes to show that it either doesn’t understand AI risk or is unwilling to do anything meaningful about it.
And that’s part of the risk.
[This essay originally appeared at Spiritual Telegraph]