A tech evangelist group has publicly asked the US government to do more to regulate AI. It’s a charade that is late and inconsequential.
According to this news story, a group called the BSA, short for Business Software Alliance, thinks that government should tell companies how and when to evaluate their use of AI, and then monitor their compliance.
Good luck with that.
First off, BSA is sponsored by a who’s who of tech companies that sell AI and other tech tools. It’s an industry trade group that exists to promote and protect the interests of its members, and its 2023 BSA US Policy Agenda “outlines the steps policymakers can take to further digital transformation.”
Advocating for their interests is totally kosher, of course. But where’s the equally organized and well-funded advocacy group fighting for the interests of, I don’t know, humanity?
Do you want to further digital transformation? Oops, your opinion doesn’t count.
Second, staying true to its purpose, the BSA is calling for “protections” for tech companies, not consumers. Don’t be fooled by its expertly written language.
The CNBC story notes “four key protections” that involve Congress defining and then evaluating the AI used to make “consequential decisions,” requiring developers to manage those risks, and creating a regulatory body to oversee compliance.
Even worse, it’s really about trying to force the government to identify what it won’t regulate, which, when it comes to complex, ever-evolving technologies like AI, means the rules will be locked in place while the circumstances change. It would also help indemnify AI makers against lawsuits arising from the use of their products.
It’s a cluster wrapped in a fiction inside a bureaucracy.
Separating consequential from pedestrian AI decisions is the stuff of an ethics debate, not a Congressional hearing.
Is it consequential to rely on AI to keep your car from crashing or to preserve your health, and then to let it monitor and manage your conduct and decide whether or not to cover the costs of your failures or shortcomings?
Is it consequential to let AI control what news and opinion media are shown to you? How about creating it, or determining and prioritizing your Internet search results?
Is it consequential when companies replace human workers with robots? Should there be a value (or cost) assigned to those efforts so you can make your buying and investing decisions accordingly?
How about adding up all those pedestrian, everyday uses of AI that make our lives more convenient? Is it consequential that it changes how we interact with the world and one another, or how it impacts our understanding of our own agency and purpose?
The risks associated with these activities go far beyond biases and operational imperfections. Focusing the regulatory conversation on the technicalities of AI’s delivery is a purposeful effort to avoid facing those bigger questions.
And there’s nobody with a bullhorn as large and powerful as the tech lobby advocating for them.
Also, the BSA wants its recommendations added to the American Data Privacy and Protection Act (“ADPPA”), which may never become law but would embed the group’s mercenary intentions in blather about protecting consumers and their privacy.
There is no privacy anymore. That train left the station a long time ago, before anybody was looking, and government regulation can only chase after it.
AI relies on data, much of it found on the Internet, according to this story. There are 30+ years of content that we human beings posted online for AI to study for free.
And every notice that pops up on a web page asking us to accept cookies provides only the slightest simulacrum of privacy protection. AI continues to scrape our behavior: the sites we visit, what we do there, and what we do next. Our cars are capturing our driving habits. Our digital assistants at home and on our phones and wrists have memories that are better than ours.
The BSA’s recommendations won’t change any of this, and it advocates for government involvement to help its member companies stay the course. It’s about protecting them, not us.
It seems kinda dumb that we’re making AI smart without thinking about what it means for our lives.
That seems like a consequential decision to me.
[Originally published at Spiritual Telegraph]