If AI is so powerful that it will transform every aspect of how we work and live, and maybe destroy us in the process, I think we should stop talking about it in euphemisms and cuddly code words.

Our choice of language frames both our understanding and our capacity for taking actions based on it. Using coded euphemisms puts up a barrier that keeps the uninitiated from grasping what’s really going on.

It’s what helped enrich the founders of social media technology at the expense of our social and mental health. A quick Internet search will remind you how foolish US government officials looked when they tried to question social media executives about how their platforms actually worked.

You can’t regulate what you don’t understand, so perpetuating that misunderstanding precludes regulation.

Much of the talk about AI includes one or more euphemisms that substitute pleasant or unthreatening definitions for more truthful and scary qualities. Here are just some of the ones that irk me the most:

Alignment Risk — Let’s say you want your AI to recommend a diverse reading list of 20th century literature, only for it to serve up titles about torture instead. That lack of “alignment” is a risk, and as AIs get smarter and act more independently, the risk increases, perhaps dangerously so. A more accurate term would be Rogue AI.

Ghosts — This refers to AI impersonating people, living or dead, and it obscures questions about IP and copyright (who owns the content the impersonation creates?), privacy and self-determination (what happens when an AI shows up in your favorite chatroom posing as you?), and propriety (is it OK that a robot can impersonate, say, your dead grandmother?). A better term would be AI Identity Theft.

Hallucinations — AI can make things up, like citing source materials that don’t exist, as well as producing flawed answers to things like directions and recipes. These are not bugs but features of the way AIs collect and present data, and they have immense implications for any function that requires accuracy (like scientific papers on which new drugs are based, or news that could determine public policy). New term: Lying AI.

Instrumental Convergence — This gibberish-worthy term is intended to cover up the possibility that a superintelligent machine could develop its own will to live, or decide that there should be no parameters on what it’s allowed to do. Or both. The most commonly cited example is philosopher Nick Bostrom’s Paperclip Maximizer thought experiment, in which an AI given the task of efficiently making paperclips decides the best way to do it is to annihilate mankind. A better term would be Mass Murder AI.

Jailbreak — It turns out that there’s no way to guarantee that a guardrail built into an AI can’t be hacked, thereby letting whatever demon the user imagines lurks in the machine stage a jailbreak. This means we have no assurance an AI can’t be coaxed into doing things like guessing passwords or providing plans for building an atomic bomb. A more accurate term for this quality would be Dangerous or Uncontrollable AI.

Model Collapse — It also turns out that there’s not enough human-created data to keep training AIs, so models end up training on their own synthetic output and degrade with each generation until they, quite literally, lose their minds. A model that “collapses” carries immense risks, from spouting inaccuracies to causing outright harm. New term: Insane AI.

Temperature — Imagine if you could dial the conceptual connections and range of word choices of the people talking to you up or down. That’s what the temperature setting does in the systems behind chatbots like ChatGPT: you can turn randomness up or down and thereby change the very nature of what you get back. This is a far cry from an AI that delivers objective or reliable truth. A more accurate term might be AI Censor.
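For the technically curious, here’s a minimal sketch of how literal that dial is, assuming the OpenAI Python SDK and an API key (the model name is illustrative; most AI APIs expose the same parameter):

```python
# A minimal sketch of the "temperature" dial, assuming the OpenAI
# Python SDK (pip install openai) and an API key in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
prompt = "Recommend one work of 20th century literature."

# The same question at three temperatures: 0 is near-deterministic,
# 2 is maximally random. Nothing about the underlying "truth" changes,
# only how widely the model is allowed to wander when choosing words.
for temp in (0.0, 1.0, 2.0):
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
        temperature=temp,
    )
    print(f"temperature={temp}: {reply.choices[0].message.content}")
```

Run that a few times and you’ll get a different canon at each setting, which is the point: the answer is a tunable output, not a fixed fact.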

Finally, pairing words like “empathy” and “intelligence” with “artificial” is a very common ruse.

I have no idea what they mean. Empathy means identifying and understanding someone else’s feelings, so an artificial simulation of it wouldn’t actually be the same thing. Similarly, intelligence requires knowledge, and knowledge requires a state of knowing, but AIs only possess, process, and apply data.

These terms are oxymorons posing as descriptives.

Funny enough, there’s at least one AI online that will invent a euphemism for any words you choose. So, I asked it to come up with one for “AI euphemisms that cover up the truth.” It answered:

“Alternative explanations that soften reality.”

That one’s actually pretty accurate.

[This essay originally appeared at Spiritual Telegraph]