Two recent events involving AI hallucinations lead to a terrifying conclusion.
In the first case, a chatbot used by Air Canada promised a refund to a customer in violation of the company’s refund policy. The customer sued, and the company claimed that the AI was “a separate legal entity that is responsible for its own actions.” The court sided with the customer (and the chatbot was promptly fired).
In the second instance, Google’s Gemini AI invented racially diverse images of Nazi soldiers and America’s Founding Fathers. Political warriors cried “wokeness,” and Google said it was a programming error.
Now to the terrifying parts.
One of the main reasons Nvidia and the leading AI developers are raising (and making) so much money is the promise of replacing human beings in the workplace. AI will do the work of tens of millions of people, though without the management headaches or expectations of healthcare.
Companies are spending billions in order to reap billions more in productivity and profitability benefits. Holding them responsible for the actions of their bots will significantly slow that transformation, while opening up executives and board members to potential liability exposure.
Some wags suggest that excitement about AI adoption is responsible for keeping the stock market healthy. Slowed adoption could hurt that overall performance, not to mention add the cost of liability protections to the fevered dreams of individual businesses planning to install bots where people now sit.
What’s even scarier is the possibility that AIs might never become more reliable or fair than people. That would render moot many of the promised social benefits of AI and put the scientific ones at risk, too.
Do you want to use a new drug that was “tested” by an AI that might have fudged the development process due to some “error” in its code? How about trusting your autonomous car to make the safest decisions at every instant?
Just think of all the school kids who will write papers quoting historical figures who never said what (or looked the way) AI claims they did. Oh, wait, AIs will write those papers, too. Never mind.
Still, the worry isn’t AI put to nefarious uses so much as its inability to perform truthfully and reliably for otherwise innocent, everyday uses.
You can just imagine AI evangelists claiming that there are no problems here, just hiccups. Every observed imperfection yields a fix, along with safeguards against dozens of failures that haven’t happened yet. AI will always get better, so there’s nothing to worry about.
What I find most terrifying is that all of us have been enlisted as subjects in this grand experiment. We aren’t informed about what’s happening, don’t possess the knowledge to understand or assess it, and haven’t an ounce of agency to do something about it.
The development process will never end. The experiment will always continue.
AI growing pains are the shape of things to come.
[This essay originally appeared at Spiritual Telegraph]