The headline of a recent article at TechCrunch declared that an AI thinks in Chinese sometimes, though its coders can’t explain it.
It was misleading, at best, and otherwise just wrong.
The essay, “OpenAI’s AI reasoning model ‘thinks’ in Chinese sometimes and no one really knows why,” recounted instances where the company’s super-charged GPT model (called “o1”) would occasionally incorporate Chinese, Persian, and other languages when crunching data in response to queries posed in English.
The company provided no explanation, which led outsiders to ponder the possibility that the model was going beyond its coded remit and purposefully choosing its language(s) on its own. The essay’s author closed the story with an admittedly great sentence:
“Short of an answer from OpenAI, we’re left to muse about why o1 thinks of songs in French but synthetic biology in Mandarin.”
There’s a slight problem lurking behind the article’s breezy excitement, though:
AIs don’t think.
The most advanced AI models process data according to how those processes are coded, and they’re dependent on the physical structure of their wiring. They’re machines that do things without any awareness of what they’re doing, with no presence beyond the conduct of those actions.
AIs don’t “think” about tasks any more than blenders “think” about making milkshakes.
But that inaccurate headline sets the stage for a story, and a subsequent belief, that AIs are choosing to do things on their own. Again, that’s a mischaracterization, at best, since even the most tantalizing examples of AIs accomplishing tasks in novel ways are the result of the alchemy of how they’ve been coded and built, however inscrutable that alchemy might appear.
Airplanes exhibit novel behaviors in flight, as do rockets, in ways that threaten to defy explanation. So do markets and people’s health. But we never suggest that these surprises are the result of anything more than our lack of visibility into the cause(s), even if we fail to reach conclusive proof.
And the combination of data, how it’s labeled (annotation) and parsed into pieces (tokens), and then poured through the tangle of neural networks is a strange alchemy indeed.
But it’s science, not magic.
The use of different languages could simply be an artifact of the varied data sources on which the model was trained. It could also be that certain languages encode the relevant concepts more economically, in fewer tokens, than others.
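To make that second possibility concrete, here’s a minimal sketch using OpenAI’s open-source tiktoken library. The encoding name and the example phrases are illustrative assumptions, not what o1 actually uses; the point is only that the same concept can cost a different number of tokens in different languages.

```python
# A minimal sketch of the "token economy" idea, using the tiktoken library.
# The "cl100k_base" encoding and the example phrases are illustrative
# assumptions; actual counts depend on the tokenizer a given model uses.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # a GPT-4-era encoding

phrases = {
    "English": "synthetic biology",
    "Chinese": "合成生物学",  # "synthetic biology" in Chinese
}

for label, text in phrases.items():
    tokens = enc.encode(text)
    print(f"{label}: {len(tokens)} tokens -> {tokens}")
```

Running a comparison like this shows nothing about what a model “prefers,” only that tokenization treats languages unevenly, which is exactly the kind of mundane, mechanical explanation the breathless headline skips past.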
A research scientist noted in the article that there’s no way to know for certain what’s going on, “due to how opaque these models are.”
And that’s the rub.
OpenAI, along with its competitors, is in a race not only to build machines that appear to make decisions as if they were human, though supposedly more accurately and reliably than we do, but to thereafter make our lives dependent on them.
That means conditioning us to believe that AI can do things like think and reason, even if it can’t (and may never), and claiming evidence of miracle-like outcomes with an explanatory shrug.
It’s marketing hype intended to promote OpenAI’s o1 models as the thoughtful, smarter cousins of its existing tools.
But what if all they’re doing is selling a better blender?
[This essay appeared originally at Spiritual Telegraph]