I’m at the Ai4 conference in Las Vegas, and I’ve been reminded that the challenge of AI governance is huge and involves lawyers, academics, and politicians working on amazingly complicated processes.

Where are the AI engineers in the conversation?

Sure, some of the leading lights in the field have warned us that their work could destroy the world, but the rest of the conversation is usually about how everybody else should try to keep them from doing it.

We need to pause and contemplate this messed-up arrangement.

Imagine if a company’s marketing or finance departments had to be similarly policed. It’s just assumed that people working in other corporate departments don’t want to purposely do bad things, whether by commission or omission. Businesses assume that their employees are virtuous, or at least that they aspire to do the right thing. They expect that their employees will take responsibility for their actions, and said employees live with that expectation of themselves.

I don’t think anyone worries about a marketer declaring “stop me before I run an ad that ruins the company,” or a business exec crying out for help to stop her from destroying a company’s finances. Bad decisions happen — picking the wrong marketing sponsorship deal, not getting the desired return from an investment — but these are bugs, not features.

Granted, these employees follow laws and industry standards, and they have the advantage of working in established departments with operating histories to rely on. Enterprise-wide AI development has no such proven guardrails within which to function.

The other departments also aren’t working with the digital corollary of plutonium.

But there’s something more at play here. I think we have unfairly low expectations for the moral capacity of AI engineers.

It’s not wholly our fault.

Those leading lights I noted earlier…you know, the guys (and they’re always guys) who are responsible for inventing AI that could annihilate humanity? They take no responsibility for having done so. Technology is agnostic to morality, or so they claim, and their intentions are no different from those of the scientists who invented the nuclear bomb or napalm.

In their telling, addressing a cool technical challenge absolves them of any culpability for its consequences. They’re just following orders, whether those commands come from superiors or their own internal voices.

That nonsense is magnified by a philosophy called Effective Altruism that many technologists like to quote. Its premise is that any problem can be solved by dispassionately looking at the relevant data and that, therefore, an engineer is qualified to address the world’s greatest challenges and weigh the costs and benefits of their solutions.

Data has been informing meaningful solutions to problems for centuries — just remember John Snow using rudimentary data analysis to trace the source of a cholera epidemic in the 1800s — but the idea that such problems are purely technical and not also political, economic, social, or even spiritual is, well, stupid.

It’s a God complex at work, with a little “if all you have is a hammer, everything looks like a nail” confusion thrown in.

Does this mean that AI engineers have to be surveilled and supervised because they can’t be trusted?

I don’t buy it.

Every action has consequences, whether you fail to see them or choose not to look. It’s a fact. Engineers know this in their personal lives. They function in society and follow rules not only because they’re obligated to, but because they see the merits of doing so, for their families and friends as well as for themselves.

Not every engineer is an Effective Altruism zombie. Most probably aren’t.

So, why aren’t the Powers That Be recognizing their humanity and providing the tools to bring it to bear on their work? Why aren’t we respecting them by asking for their moral awareness of the consequences of their actions? Couldn’t we devise processes with risk limits built in, so that projects didn’t need to be chased down once they were released into the wild?

Where are the training programs and ongoing support to encourage them to be part of the solution to AI risk and not simply the unkempt, uncontrollable source of the problem?

Relying on hall monitors is an awful way to control behavior. An adversarial arrangement almost dares violations, or at least skirting the edges of what’s allowed. Hall monitors ultimately fail because policing doesn’t create ethical behavior; it merely backstops it.

We need to talk more about an applied ethics for AI development and deployment.

Here’s a link to an approach to doing just that, called CyberConsequentialism. Please give it a read and tell me what you think!

[This essay appeared originally at Spiritual Telegraph]