European legislators adopted the draft of a “landmark” AI law last week that “ensures safety and compliance with fundamental rights, while boosting innovation.”

It simply ensures that the robot takeover will be orderly and unstoppable.

The Artificial Intelligence Act declares that AI should be restrained from doing things that people already do to one another: deny basic rights and respect, manipulate opinions, and predict propensities for behavior. Existing laws can't stop such abuses of what MEPs call "fundamental rights," let alone address their consequences for applying for bank loans, getting or keeping jobs, or participating in elections.

Maybe holding AI to the same low standards (and providing similar wiggle room for interpretation) will be an accomplishment?

But wait, there's more: "High risk" AI systems — defined as pretty much any public AI that isn't being used to sell us something — should not only somehow be responsible in ways that people aren't, but also submit to human oversight and respond to demands for details about their operation and to complaints about their decisions.

Considering that the folks who are building today’s LLMs can’t explain how their creations learn to do things they weren’t coded to do, including “hallucinating” details, cheating at tasks, and showing signs of intentionality, these oversight and transparency components of the proposed law are pretty much dead on arrival.

The proposed legislation also contains the required blather about supporting innovation and commits resources at the national level to help AI developers build new things. And, in typical EU fashion, it promises a bloated supranational bureaucracy that will keep a handful of people employed whilst the AI revolution puts everyone else out of work.

“Safety and compliance with fundamental rights” is a MacGuffin intended to distract governments (and us) from this real employment and life-changing outcome of AI development.

People are being tasked with “partnering” with AI under the ruse that it will help them when they’re actually helping train the AIs to do their jobs. Robots are studying human movement on factory floors and behind the steering wheels of cars and trucks. Vast arrays of data are getting crunched so that machines can get better at making the decisions once made by human employees. Many thousands have already lost their jobs to AIs or, indirectly, become obsolete because workflows have been automated.

Many more will follow, and the AIs will keep getting smarter, so they'll always end up winning the competition for jobs and authority.

On the other side of the equation are owners and providers of capital who benefit from increased productivity from workers who don’t need sleep, healthcare, or even physical form outside of a server box stuffed in a dark room. AI turns labor from a cost into an asset by streamlining the collection of profits that were formerly shared.

What happens to our work and personal lives? Don’t worry, say the people who will profit from the transformation. Everything will improve, however unevenly and spread over time spans that will likely be measured in generations, not years.

There's no way to say what the new world will look like, nor to define your role or your children's role in it, but the profit opportunity is clear enough to warrant the investment of zillions of dollars.

Maybe the legislators who should wrestle with these facts are unable or unwilling to do anything about them. Maybe the people who'll get rich from building robots are too powerful anyway. And maybe the rest of us are too stupid or beaten down to realize that our problem isn't that AI might treat us unfairly or unkindly, but rather that its most efficient function will be to take our jobs and automate our decision-making.

The EU’s Artificial Intelligence Act will only help ensure that the robot takeover is unstoppable.

But thank goodness it will be orderly.

[This essay appeared originally at Spiritual Telegraph]