We talk about what AI might do to the future without noting that it’s already taking control of our past.
New Scientist reports that about 1 in 20 Wikipedia pages contain AI-written content. The real number is probably much higher, considering how long LLMs have been scraping and preserving online content without distinguishing between its human and machine origins.
Wikipedia's content is then fed back in to further train those same LLMs.
There's no need to worry, according to a Wikipedia expert quoted in the New Scientist article: the site's human editors, following its tradition of community monitoring, will devise new ways to detect AI-generated content and at least ensure its accuracy.
They will fail or, more likely, the battle has already been lost.
We have been conditioned to prefer information that's vetted anonymously and/or broadly, especially when it comes to history. The groups of people who once controlled how the past was analyzed and presented were inherently biased, or so the logic goes, and their expertise blinded them to the details and conclusions we expected to see in their work.
History wasn’t customer friendly.
So the Internet gave us access to an amorphous crowd that would somehow aggregate and organize the material and, if we didn't like what it presented, let us add our two cents to the conversation.
Wikipedia formalized this theology into a service that draws on a million users as editors, though little more than a tenth of them make regular contributions (according to Wikipedia itself). Most studies suggest their work is pretty accurate, or at least no less so than the output of the experts they displaced.
But isn’t that because they rely on that output in the first place?
Wiki's innovation, like file sharing in the '90s, is less about substance than about presentation and availability.
Now consider that AI is being used to generate the content on which Wiki's lay editors base their opinions, complete with attributions to sources that appear reputable. Imagine those AI insights aren't always right, or not completely so, but because machines can generate and share them far faster than humans ever could, the content seems broadly consistent.
Imagine that this propagated content is deliberately false, so that a preponderance of sites emerges claiming the Holocaust didn't happen, or that sugar is good for you. It won't matter whether humans or machines are behind such campaigns, because there's no way we'll ever know.
And then meet the new Wikibot that passes the Turing Test with ease and makes changes to the relevant Wiki pages (ChatGPT probably passed it last year). Other Wikibots concur, while the remaining human editors see no reason to disagree, either with the secretly artificial editors or with their cited sources.
None of this requires a big invention or leap of faith.
AI already controls our access to information online: everything we're shown on our phones and computer screens is curated by algorithms intended to nudge us toward a particular opinion, a purchase decision and, most of all, a need to return to said screens for further direction.
I wonder how much of our history will survive, especially the record of these recent years in which we've begun outsourcing our intelligence and agency to AI.
Will AI-written history mention that we human beings weren’t necessarily happy losing our jobs to machines? Will it describe how AI destroyed entire industries while enriching a very few beyond belief?
Will we be able to turn to it for remembrance of all the things we lost when it took over our lives, or will we just find happyspeak entries that repeat its promoters' sales pitches?
In his novel 1984, George Orwell wrote:
"Who controls the past controls the future: who controls the present controls the past."
I wonder if AI will make that quote searchable 20 years from now.
[This essay appeared originally at Spiritual Telegraph]