There are a number of initiatives underway to regulate AI development and use. They don’t address the fundamental issue of responsibility.

It might sound like a fine point, but it’s not: compliance with rules effectively absolves developers of responsibility for any activity those rules don’t identify. That’s why we’re hearing so much from tech promoters practically begging for government intervention.

They want regulators to define what they’re going to regulate. Everything else that AI can or might do will fall outside that scope, and that will include every big or meaningful impact of the technology.

The regulations won’t be able to keep pace with what AI might do once it’s operating in the wild, learning and then choosing new ways to realize its goals.

That’s the point of AI, really, and it’s why regulating it isn’t like regulating other products that must pass safety tests before they can be sold or used. A car stays a car after it rolls off the production line. The same goes for toasters, building materials, and food products (spoilage notwithstanding).

Even if an AI’s creators and users try their very best to be responsible and not to build nefarious or dangerous uses into the design, they can’t necessarily control everything that might happen once it gets powered up.

AI changes. There have been many examples of systems knowing and doing things that their researchers can’t explain. This isn’t news: developers have known for many years that they couldn’t explain how some advanced algorithms function (it doesn’t help that AI is built on neural networks, which try to mimic how human brains work…which we can’t explain, either).

Additionally, the idea of responsibility is based in large part on connecting cause with effect. How do you do that when an AI is executing zillions of commands across vast arrays of new data that its original coders never specified?

There are ways to address these issues, and most of them involve reducing project scope and complexity. AI that did more with less would be inherently more predictable. Projects with more explicit functional goals would be less likely to create unintended consequences.

Responsible AI wouldn’t rely on good intentions or mere compliance with regulations, but on designing responsibility into the technology builds themselves…and then sharing those details fully and regularly over time.

Developers and users would have to assert responsibility for their work. But that’s not how “responsible AI” works.

Charley Johnson has written a very compelling essay on this topic.

[This essay appeared originally at Spiritual Telegraph]