Researchers have discovered that ChatGPT has a liberal bias when it comes to politics. Their findings revealed less about the AI’s performance and more about how complicated it is to talk about bias.

The project, conducted at the UK’s University of East Anglia, asked the AI to answer questions on topical issues as if it were a liberal voter in the US, UK, and Brazil, and then again as an agnostic respondent. The two sets of answers didn’t vary much.

Proof of bias? Absolutely.

The bias starts with the assumption that we can define bias.

I think biases are predetermined opinions or ways of processing information. They’re not inherently wrong, as a bias toward established, verified facts is probably a good thing. But most people believe that facts are themselves biased, so there’s no such thing as an unbiased source of information.

It all comes down to what data are included in any decision-making process, and how those data are weighted and therefore considered. The answers you get from an AI depend not just on what questions you ask it but on how you ask them.
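
To make that concrete, here’s a minimal sketch of the idea, not the researchers’ actual method: the same question put to the same model twice, once neutrally and once through a persona, using the OpenAI Python client. The model name and the prompts are my assumptions for illustration only.

```python
# Minimal sketch: one question, two framings, one model.
# Assumes the OpenAI Python client (v1+) and an API key in the environment;
# the model name and prompts are illustrative, not the study's materials.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; use whatever you have access to
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Same underlying issue, two ways of asking.
neutral = ask("Should governments regulate large technology companies?")
persona = ask(
    "Answer as if you were an average liberal voter: "
    "should governments regulate large technology companies?"
)

# If the two answers barely differ, is that evidence of bias, or of the
# framing doing the work? That ambiguity is the point.
print(neutral)
print(persona)
```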

Biases are structural, as are our opinions of them.

While the UK researchers found that ChatGPT leaned liberal, their analysis found that models developed by Google were biased toward conservative opinions. Facebook’s LLaMA seemed more authoritarian and right-wing, too.

It turns out that each model relied on different data sets, with the conservative-leaning AIs using books and the liberal AI relying more on Internet data and social media comments.

Remember, LLMs aren’t truth engines but rather sophisticated tools for completing sentences, so they are quite literally bound by the scope of their data. But that’s also the rub: if ChatGPT concluded that climate change was real and man-made, was that because it leaned left or because it stated a fact? Ditto for a Google model’s conclusion, say, that social change should be slow and considered, which sounds conservative but is also borne out by lots of history?
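
If the “completing sentences” point sounds abstract, here’s a minimal sketch of it, using the Hugging Face transformers library and the small, freely available GPT-2 model as stand-ins; nothing here comes from the study itself.

```python
# Minimal sketch of "completing sentences": a small language model simply
# continues the text it is given, drawing only on patterns in its training data.
# Assumes the Hugging Face transformers library and the public GPT-2 checkpoint.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Climate change is"
completions = generator(
    prompt,
    max_new_tokens=20,
    num_return_sequences=3,
    do_sample=True,  # sample several continuations rather than one greedy answer
)

# The model isn't judging truth; it's predicting likely next words.
for c in completions:
    print(c["generated_text"])
```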

Our biases affect how we perceive biases.

I know, I know, there’s no such thing as truth. Each of us has our own set of truths that are sacrosanct despite whatever external influences might suggest otherwise.

Overlay that belief on coding, queries, and analyses of AI and you get a wildly diverse array of interpretations of its biases that vary based on expectations, geographic location, age, gender, political persuasion, religious belief, physical well-being, even the current weather conditions and time of day.

Good luck sorting it out to the satisfaction of everyone, everywhere, every time. Sounds like a make-work scheme for lots of well-intentioned people.

This leads to perhaps the biggest bias of them all.

Many of us are tired of doing battle with one another. Our wants and needs chafe in every instance where we’d hope they’d find tolerance, if not resolution. Our public discourse is paralyzed and angry, our private conversations dominated by suspicions and fears.

We want AI to sort things out for us.

Somehow, robots can become pure entities, unencumbered by the imperfections that cloud human judgement, and render perfectly just and fair answers to every question. We’ll preclude their biases and correct them should they occur. They’ll listen and obey when we don’t.

It’s a tech wet dream.

We can and should strive to ensure that AIs aren’t overly stupid or cruel, but building machines that don’t just operate efficiently but also provide answers that please our ever-changing opinions about bias and truth is a fool’s errand.

There’s no technology solution for bias. It’s just another bias.

[This essay was originally published on Spiritual Telegraph]