Every assumption about AI ethics and potential regulation is based on our ability to understand how a system works and thereby identify and manage the implications of its operation.

Except that doing so is impossible, by definition, and there’s a name for why:

Impossibility Theorems.

The language of this white paper is “at the level of lawyerese” to ensure “wide readability,” which means I got maybe a fourth of it, but that was enough to grasp the idea that there are impossibilities inherent in the models for what’s possible in AI oversight.

Those impossibilities get more real and likely as the systems get smarter. Here are just a few of them:

Uncontainability, which is our inability to prevent harmful actions, because we can never have 100% visibility into, or certainty about, the harmful properties of complex programs.

It’s related to Unexplainability and Incomprehensibility, which describe the impossibility of users, or the system itself, giving 100% accurate explanations of an AI’s decisions.

Unpredictability states that we can never precisely and consistently predict what actions a smart system will take. It’s joined by Unsolvability of reward corruption, which means we can’t reliably figure out what went wrong when an AI trains and rewards itself into reaching a given conclusion.
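The uncontainability and unpredictability claims trace back to classic computability results: no program can decide a nontrivial behavioral property (like "will this do harm?") of all other programs. A minimal sketch of the diagonal argument behind the halting problem (my own illustration, not anything from the white paper; the oracle names are hypothetical):

```python
# Why no perfect "harm detector" for arbitrary programs can exist:
# any purported halting oracle can be fed a program built to do the
# opposite of whatever the oracle predicts about it.

def make_diagonal(claims_to_halt):
    """Given a purported halting oracle, build a program it must misjudge."""
    def diagonal():
        if claims_to_halt(diagonal):
            while True:   # oracle said "halts", so loop forever
                pass
        # oracle said "loops forever", so halt immediately
    return diagonal

# Any oracle is wrong on its own diagonal program:
always_yes = lambda prog: True    # claims every program halts
always_no  = lambda prog: False   # claims no program halts

d_yes = make_diagonal(always_yes) # oracle says it halts; it actually loops forever
d_no  = make_diagonal(always_no)  # oracle says it loops; it actually halts
d_no()                            # returns immediately, refuting the oracle
```

The same construction defeats any oracle, however sophisticated, which is the formal root of the "we can never have 100% certainty about complex programs" claim.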

This intriguing perspective is a mishmash of the math behind Gödel’s incompleteness theorems and the social choice models created by Kenneth Arrow. It directly challenges the assumption that we and our AI creations will consistently get better at observing, identifying, and managing the impacts of complex systems.
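The Kenneth Arrow reference points at social choice theory, where the famous impossibility is that no voting rule can aggregate individual preferences into a consistent group preference while satisfying a few reasonable fairness conditions. A minimal illustration (my own, not from the white paper) is the Condorcet cycle, where majority voting produces a group preference that loops back on itself:

```python
# Three voters' strict rankings (best to worst) over options A, B, C.
ballots = [
    ["A", "B", "C"],
    ["B", "C", "A"],
    ["C", "A", "B"],
]

def majority_prefers(x, y):
    """True if a strict majority of voters ranks x above y."""
    wins = sum(1 for b in ballots if b.index(x) < b.index(y))
    return wins > len(ballots) / 2

# Pairwise majority votes: A beats B, B beats C, yet C beats A.
# The group's "preference" is a cycle, so no consistent ranking exists.
for x, y in [("A", "B"), ("B", "C"), ("C", "A")]:
    print(f"majority prefers {x} over {y}: {majority_prefers(x, y)}")
```

Arrow generalized this: the inconsistency isn't a quirk of majority rule but a property of every aggregation rule meeting his conditions, which is the flavor of impossibility the white paper imports into AI oversight.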

No, the unknowable ghost inside AI will get more powerful and present. The actions taken by machines that can learn and function independently of direct commands will be impossible to predict with 100% certainty.

Again, the language of the white paper is dense, but the writers behind the recent one-season Peacock streaming marvel that was Mrs. Davis either read or channeled the thing: [Spoiler Alert] A global AI that monitors and controls all human activities to ensure that everyone is “satisfied” emerged from a customer service app coded for Buffalo Wild Wings.

The show and the white paper are testaments to a long and uncertain road to global domination or destruction, of course, but both suggest that reliably managing AI’s journey won’t just be difficult.

It’ll be impossible.

[This essay was originally published at Spiritual Telegraph]