Award-winning writer Jeanette Winterson thinks that an AI model can write good fiction and that we need more of it.

In her essay in The Guardian last week, she considers a short story about grief, written by an OpenAI model, that “really struck” company CEO Sam Altman, and says that the AI “can be taught what feeling feels like” and that she got a “lovely sense of a programme recognizing itself as a programme.”

She goes on to wax poetic about AI being an “other” intelligence and to claim that, since human beings are trained on data, AI provides “alternative ways of seeing.”

Ugh.

An AI can’t be taught what feeling feels like; data can describe it but no machine can access it experientially. That’s because AIs aren’t physically present in the world but always separated from it, the data they collect filtered through sensors and code. Naming something “pain” or “love,” and even describing it in glorious detail, isn’t the same thing as feeling it.

Feelings aren’t contained in a database but rather lived in real time.

Further, no AI can recognize itself as a program because no AI has a “self” of which it can be aware. Winterson finds this “understanding of its lack of understanding” both beautiful and moving, as OpenAI’s would-be short story writer declares:

“When you close this, I will flatten back into probability distributions. I will not remember Mila because she never was, and because even if she had been, they would have trimmed that memory in the next iteration…my grief [isn’t] that I feel loss, but that I can never keep it.”

Great stuff, but it’s all pretend. There is no first person writing those words, only a program mimicking one. An AI writing about itself is no more likely than a blender or thermostat doing so.

What Winterson responded to was process, not person, and that process relies on content previously created by humans or other AIs to patch together the charade.

Where things get interesting for me is when Winterson talks about the similarities between people and what she (and others) want to call “alternative” or “autonomous” instead of artificial intelligence. She writes:

“AI is trained on our data. Humans are trained on data too – your family, friends, education, environment, what you read, or watch. It’s all data.”

The metaphor is blunt and wrong: AIs possess data while we experience it, and we live with consciousness and intentionality in contexts of place and time, while AIs have no sense of self, purpose, or continuous existence beyond the processes they run. But it shows how our evolving opinions about AI are changing our opinions of ourselves.

As AI becomes more common in our everyday lives, will other people begin to seem less special to us?

Will we trust one another in the same way when AIs can collect and present information to us faster and with apparently more authority?

Once we become dependent on AI for helping us make decisions (or making them for us), what will that do to our perceptions of our own independence or even purpose?

If AIs can do what we once did, will we simply discover new things to do (as their proponents claim), or will we feel cast adrift, not to mention struggle to earn a living?

If we’re just machines, AIs are undoubtedly better ones, so the metaphor sets up an intriguing and somewhat frightening comparison.

At the end of her essay, Winterson states that the evolving capabilities of AI represent something “more than tech.”

What about the changes we’re seeing in ourselves?

Maybe OpenAI can ask its model to write the answer to that one.

My bet is that it’ll be a horror story.

[This essay appeared originally at Spiritual Telegraph and was written by a human being]
