“Oh, so you’re the ethics evangelist,” one of my fellow panel participants sneered and then walked past me toward the stage.

We were at a conference in Amsterdam to talk about AI ethics and what businesses could do to implement them. Within moments of the panel’s start, this expert and another one declared that there was no such thing as AI ethics.

The sneer was prelude.

You see, AI is just another technology invention, no different than a car, so the challenge is simply one of compliance. Rules exist to govern manufacturing, employment, and sales of products. Talk about anything else is a distraction, orchestrated by the tech titans so they can pursue their goals of world domination while we’re busy trying to answer unnecessary questions.

World domination by making another gizmo. Sounds like AI is a lot more than another tech tool, but whatever.

I tried to push back, as did other panelists. Sure, AI is a technology and, like looms and cars, it’s supposed to make our lives easier. But AI does it by getting smarter, so its capabilities and potential applications are effectively limitless.

“I don’t want my car deciding to trade stocks in my portfolio, or my light switch killing me,” I quipped, thinking I was being funny to illustrate a point.

“They can’t do that,” my sneering co-panelist announced.

The panel didn’t go downhill from there. It just never got any better.

I’m all for incorporating AI into the existing frameworks of compliance and self-imposed behaviors that guide companies these days; in fact, I think we need to stop thinking and talking about it as some separate, stand-alone thing for which we need to invent totally new theories, processes, and procedures.

The Status Quo kinda feels like a make-work gift for management consultants and overzealous bureaucrats. And, as the outcomes of social media tech painfully show us, failing to apply our “old” insights and expectations to new tech doesn’t obviate them; it just means we’ll suffer the consequences of ignoring them.

And just look at what corporations have done to the legitimate need for sustainability, especially on the environmental and DE&I fronts. Both have become predominantly marketing campaigns.

But the conversation about AI shouldn’t stop there, even if it did at our conference.

“We need to use AI as the prompt for more honest and wide-ranging discussion about ethical business practices,” I said, as we thankfully approached the end of our panel’s time. I ignored the snarly looks from the panelists who I presumed thought otherwise, and kept talking so they couldn’t slam me.

I said that I believe there are three levels of ethical issues that touch on AI, and they all incorporate business behaviors overall:

First, there’s the question of equity and inclusion. It’s easy to see the need for smart tech to accommodate diverse users in diverse settings, not to mention avoid penalizing them in the process. It’s also smart business practice.

But it gets thornier when you try to operationalize it: who gets included in, or excluded from, its making and use? And at many companies, the effort to sell good intentions greatly exceeds any success at resolving those complexities of implementation.

Where does social engineering meet AI development and deployment, not to mention ongoing support? What happens when an operationally inclusive tech hits the market at a price that excludes certain users?

Moral absolutes, no matter how passionately people might think otherwise, make bad dictates for management strategy, inasmuch as they change over time and are far more subjectively true than objectively enduring.

That’s why they’re ethical questions that require ethical decisions.

Second, there’s the question of misuse or purposeful harm. Again, companies have to follow the law, and their operations are regulated, for the most part. This is particularly true when it comes to liability for damages a product might cause.

Looms and cars need to meet explicit standards for reliable performance, with no ethical debate required. User misuse, for whatever reason, is not the maker’s problem as long as the fault couldn’t have been predicted from an element of the design.

But AI is all about learning, iterating and, in demonstrable instances, inventing crap out of thin air. Who’s responsible when AI does something it shouldn’t have done? Coders? Users? And deciding what aspects or amounts of this novelty are acceptable, and to whom, is a dicey ethical conundrum.

I live in fear of technologists inspired by some nonsense belief that their coding prowess equips them to resolve those dilemmas.

Third, ethical questions arise when we ponder living in a world wherein AI does everything its proponents promise it’ll do.

On a micro level, there are questions of judgment that the most advanced AI will be forced to answer. Take an automated car presented with an unavoidable choice between injuring its occupants, those of other cars, and/or pedestrians. Or what about the smart electric utility that must choose which customers will suffer service interruptions if demand is too great?

Even if these decisions could be reduced to rules, they’d rely on some serious ethical consideration.

On a macro level, there are all of the impacts of AI on jobs (i.e., destroying them), personal autonomy (managing it), and our individual senses of selfhood and human uniqueness (challenging them, to say the least).

These are all ethical questions that never get raised at conferences like the one I attended last week, and they seemed uninteresting to the snarkier members of my panel.

I suspect that’s because they make a living burying their expert heads in the sand of dissolute obfuscation so that their paying clients can keep their eyes squarely on the prize they seek. They’re apologists for the AI Status Quo.

Pretending that ethical questions don’t exist won’t make them go away. Sneers don’t replace thoughtful dialogue.

Yeah, I make no apologies for being an ethics evangelist.

[This essay originally appeared at Spiritual Telegraph]