We’ve been told that it’s important not to export our blind and cruel biases to the AI we create.

It is. But it’s nowhere near the real problem.

After all, those biases are already a fact of life. We humans have no problem imposing them on one another every day, all of the time. No technology required.

The promise of AI is that it will be better than us — smarter and faster without the need for sleep or job satisfaction — so making it less biased than we are is a worthy goal.

But it distracts us from the problem we should be discussing.

What happens when AI can do our jobs? Beyond our work, what happens to our sense of personal empowerment and accomplishment when more and more tasks are handled by those blissfully unbiased robots?

What happens to company operations and valuation, the function of markets, and the way governments are run when AI does most of the heavy lifting? What about our beliefs in our own uniqueness when robots can talk and act just like us?

Bias is the least of our worries.

Is the idea that the AI takeover will be palatable if it’s more benign than our own feeble efforts at governing ourselves and living our lives? True-believer technologists and commercial interests want us to focus on the benefits of such a benevolent dictatorship: it will let companies cut costs by firing people and make more money through better analyses of their customers’ interests.

It’s an incomplete description, since AI will infiltrate even the smallest, most inconsequential choices and actions we take. AI will influence billions of everyday decisions, and we won’t be able to distinguish it from the actions of a real person.

Each of us will get a digital personal assistant that comprehends spoken commands and smooths the rough edges of running errands, making appointments, and buying stuff. We won’t have to think our biased and cruel thoughts for ourselves.

So I get the idea that AI should carry less of our bias.

But shouldn’t we be talking about the merits of giving up jobs, our personal autonomy, and our understanding of what it means to be human instead of fine-tuning how good AI will be at running things?

AI is the problem.