The leading lights behind AI convened in DC last week to educate lawmakers on how they intend to get rich changing our lives, and on why there's nothing we can do about it.

If past public statements are any indication (the hearings were closed-door, which is a genius move when you want to encourage conspiracy theories), the tech savants likely delivered a bifurcated AI risk model that presents two extremes: On one hand, there are risks that AI does its job unfairly or imperfectly…it's biased, stomps on our privacy, and tees up imaginary sources when kids have it do their homework.

On the other hand, there's the risk that an AI will decide to annihilate humanity sometime in the future. This, too, would be the result of some coding glitch.

This leads to the assumption that government should try to regulate both extremes.

It’s a fool’s errand.

AI is too distributed, its technical details too confounding, and its negatives and threats often not apparent until the tech has been set loose in the field. There's no way even the biggest, most well-funded regulatory bureaucracy could stay on top of AI development and deployment.

Worse, it couldn't make good or reliably consistent decisions even if it did, since the benefits of every technical innovation come with drawbacks or costs. Just imagine the gloriously confounding assessments an AI bureaucracy might make about what is or isn't "worth it" to society, or what amount of pain, suffering, or risk is "acceptable" for some greater good.

After all of the forms and submissions and analyses and conversations and certificates and fees, do regulators really think that their intentions, however good and sincere, would result in better and safer AI?

Of course not. And that’s the point.

The businesses behind AI stand to make zillions on the technology. So their agenda is to define, and thereby limit, the obstacles government might put in the way of the speed and scope of that inexorable outcome.

The resulting regulation will be nothing more than a nuisance tax on AI, and the government has asked a bunch of Al Capones to help it write the tax code.

It’s the right answer to the wrong question.

The real risk of AI isn't that it does something wrong or imperfectly, but that it does exactly what it's intended to do: replace human effort and productivity with machines. It's something technology has done for generations, and we've yet to figure out how to talk about it, let alone ensure that it does more good than harm.

Remember, every technology comes with benefits and costs. The devil is in those details…who is impacted, how, when, and for how long are the key variables, and they yield different conclusions from the same data.

There is ample evidence that AI will utterly transform the job market, our definitions of IP and ownership, and even how we see and experience personal autonomy.

What's the cost/benefit of putting millions of people out of work? How is copyright protected when LLMs regurgitate copyrighted work, or when the first AI claims a copyright of its own? Who's liable when an autonomous car gets in an accident? What are patients' options when medical care is rationed based on AI models of survivability?

These aren’t functions that need to be refined or failures that should be preempted. They’re all features of what AI will do and what its builders and users will rely on to get very, very rich.

They're using a tried-and-true playbook to deflect attention from these topics for as long as possible, hoping that the circumstances of adoption will make addressing them pointless.

Just look at what happened to us with social media. It’s like the world caught a collective disease that wasn’t diagnosed until it was too late to do anything about it.

The fix is in.

[This essay originally appeared at Spiritual Telegraph]