Four-fifths of America’s state legislatures are contemplating AI-related bills, according to this story from Axios. It’s not good news.
Many of them are worried about deepfakes, which use the “deep” tools of machine learning to create convincingly “fake” visuals and sounds. The opportunities for misleading and manipulating people with this tech are endless.
But new laws?
For starters, there’s no such thing as a completely “true” picture, video, or audio clip. Any media artifact is self-contained and self-referencing; when we consume it, we’re not privy to the intention of the creator or the context of its creation.
What was happening just beyond the frame of the picture? What did the interviewee say just before the audio segment or reported quote? Why was this video shot in so-and-so place and in this way versus that?
All the content we consume is embedded with purpose, whether intentional or not. It’s real stuff, but it’s never entirely free of qualities that might make it less so.
Second, people have been using technology to create truly fake content for generations.
Ghostly fairies were superimposed above contemplative subjects as proof of their existence. Comrades appeared and disappeared from Soviet-era propaganda photos as their fortunes waxed or waned. There’d be no movie business without tools to make things seem real.
Peter Pan can’t really fly. Neither can the Millennium Falcon. And Arnold Schwarzenegger isn’t a metallic robot under his skin.
Third, our airwaves (and cables) are filled with lies that have no overt dependence on technology beyond relying on it for propagation. Politicians make misleading statements. Businesses suggest imaginary benefits from using their products. Online influencers pocket money for claiming to love things.
Fourth, all of this comes despite many laws intended to regulate the truthfulness of our interactions.
AI hasn’t created this rotten situation. It just promises to make it rotten-er.
I’m not optimistic that laws passed by state legislatures will do much, if any, good. The wording will be difficult, since the Constitution protects our right to free speech irrespective of its accuracy. Also, one person’s fake is another’s truth. Defining what AI will and won’t be allowed to do sounds like a daunting task.
Tasking it to lawmakers who may not fully understand AI, or tech in general, sounds even less promising.
And then will come the challenges of implementation and enforcement. How will lying via AI be handled differently from, well, lying via old-fashioned means? What agency or department will make those decisions? Will one state have jurisdiction over deepfakes originating in another?
The Internet is kinda everywhere, last time I checked.
If there’s good news in any of this, it’s for the folks developing and selling AI, who must be thrilled. Piecemeal gestures will never add up, and the real public policy issues, mass unemployment and surveillance that threatens to control our lives, go unaddressed.
The risk of AI is not that it’ll produce more fakes but rather that it will determine for us what’s true.
It would be great news if states were stepping up to this challenge, especially since the Feds are already in Big Tech’s pocket.
[This essay was published originally at Spiritual Telegraph]