You’ve probably heard by now that a computing platform called “ChatGPT” can converse like a human being, thereby signaling the arrival of Artificial Intelligence.
Only AI has been around for a long time, and it’s not particularly intelligent.
ChatGPT works like a light switch, albeit a complicated one.
The challenge is that we have no established understanding of what “intelligence” even means, at least when it comes to how we human beings possess or lack it. The dictionary says it’s “the capacity to acquire, understand, and use knowledge,” which means that every machine we’ve invented since the chariot wheel has had some intelligence to it, not just us. The very premise of technology is imbued with it — we build devices to do things for us in particular situations — so they’re all intelligent, artificially so.
Anti-lock brakes. Thermostats. Light switches. All of them operate under the guidance of acquiring, understanding (assessing), and then using knowledge (which we can call data, or simply observed aspects of experience).
Intelligence is the capacity to actuate this simple equation: “If this happens, do that.”
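To see how little that definition requires, here’s a minimal sketch in Python of the entire “intelligence” inside a thermostat (the setpoint and temperatures are made-up example values):

```python
# The whole "intelligence" of a thermostat, reduced to one rule.
# The 20.0-degree setpoint is an arbitrary example value.

def thermostat(room_temp_c: float, setpoint_c: float = 20.0) -> str:
    # If this happens (the room is colder than the setpoint)...
    if room_temp_c < setpoint_c:
        return "heat on"   # ...do that.
    return "heat off"

print(thermostat(18.5))  # heat on
print(thermostat(21.0))  # heat off
```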
By this definition, animals and insects are intelligent. So are plants, as evidenced every time they turn to follow the sun. The genetic code that underlies every living thing is stuffed with intelligence that allows it to shape arms, paws, and stems.
Gravity, electromagnetism, and the other basic forces of nature are intelligent. Electrons observe charge and move accordingly. Heat flows from hotter bodies to colder ones. Even black holes operate under the guidance of “if this happens, do that.”
The universe is intelligent. It’s a profound idea.
Where the conversation gets confusing is when we equate such intelligence with our human experience of consciousness and emotion.
We have no proof that other living things are self-aware like we are. While some animals may evidence emotions, as any pet owner will assert, it’s also possible we’re projecting ours onto them. Certainly, rocks don’t know they’re rocks. The Higgs boson is never happy or sad.
Human beings draw on our consciousness and emotions to form judgments and make decisions. That makes our intelligence a different, er, animal than other forms of it, organic or artificial. Interestingly, we don’t even know for sure that we’re all conscious in the same way (or at all). I know that I am.
I think.
But what’s certain is that ChatGPT is neither conscious nor does it possess emotions. Technologists might say mimicry is the same thing as experience, but they’re confusing a metaphor with an explanation. Alan Turing’s test wasn’t a proof of human-ness but an admission that we couldn’t quite define it.
ChatGPT can sense its situation and make decisions based on its programming and the examples it collects from experience. Again, experts will confuse things with references to how this gets done, but it’s ultimately a complicated “if this happens, do that” device. A very complicated one.
When it appears to converse like a person, it’s just doing a good job of obeying its code. Like a squirrel bound by its instincts and last night’s owl attack. Or a light switch that “knows” to turn on or off.
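For the curious, here’s a toy sketch of that same pattern scaled up. This is emphatically not how ChatGPT is actually built (real language models learn billions of numeric weights, and the word frequencies below are invented for illustration), but it shows the shape of the idea: the “if this happens, do that” rules are learned from example text rather than written by hand.

```python
import random

# A toy next-word predictor. The "rules" are invented example statistics
# standing in for what a real model would learn from mountains of text.
learned_rules = {
    "the cat": {"sat": 0.6, "ran": 0.3, "slept": 0.1},
    "cat sat": {"on": 0.9, "down": 0.1},
}

def next_word(context: str) -> str:
    # If this context happens, say that (weighted by learned frequency).
    options = learned_rules.get(context, {"...": 1.0})
    words, weights = zip(*options.items())
    return random.choices(words, weights=weights)[0]

print(next_word("the cat"))  # most often: "sat"
```

Scale that lookup up by a few hundred billion learned weights and you get something that can hold a conversation, without ever leaving the realm of “if this happens, do that.”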
ChatGPT is still a big deal, though not for the reasons anybody mentions:
First, its conversational and adaptive qualities should allow us to apply it to lots of decision-making that is currently done by people.
It’s important to remember that people are inherently biased, ill-informed, and unreliable, so the idea that we could rely on an AI that is less so is truly exciting. The problem is that those same people are coding it, so what’s to say it doesn’t just automate their imperfections?
Second, it reveals how many of our experiences are reducible to sets of “if this happens, do that” commands, and there are lots of them. The tech demonstrated by ChatGPT will help speed the use of AI to replace people and do their jobs…perhaps hundreds of millions of them within a decade, according to one reputable study.
Also, there’s the coding challenge: we imperfect humans possess a seemingly endless capacity to find and use inputs into our decision-making that we ourselves often don’t even know we possess. Can coders ever imagine, let alone map, all of them?
Consider this: An AI driving a car faces existential questions as it approaches an unavoidable accident. Will a machine with no sense of self or childhood memories make a “better” decision than a human driver drawing on a lifetime of varied and nuanced experience (combined with her or his intelligence and emotions)?
Maybe. Maybe not.
Third, a community of companies is poised to make zillions selling AI, and they’re pretty much dictating the terms of the conversation. Why do we let them do that? Every day that passes, every hour, minute, second, or derivative thereof, their algorithms improve as they collect more data and crunch it.
ChatGPT isn’t a news story. It’s a marketing campaign, and the folks behind it (and other AI tools) are on a relentless march toward a future that we don’t understand because we don’t really talk about it. And they’re OK with that.
ChatGPT isn’t going to lead to the destruction of the world, but it could well presage a radical rewiring of how we work and live.
It’s a glorified light switch, and we should be debating whether or not we want to flip it on.