Death of Writing, Writing of Death: A Reading List on Artificial Intelligence and Language


The other day, I saw a tweet of an obituary, seemingly written by a bot. Some people found the obituary’s odd but delightful phrases — "Brenda was an avid collector of dust," "Brenda was a bird," "she owed us so many poems," and "send Brenda more life" — hilarious (send me more life too, please!), while others couldn’t help but wonder: Did a bot really write this?

You didn’t have to fall too far down the rabbit hole to learn that the obituary was, in fact, written not by a bot but by a human — writer and comedian Keaton Patti — as part of his book, I Forced a Bot to Write This Book. Some commenters, perhaps proud of their human-sniffing capabilities or just well-versed in real machine-written prose, were quick to point out that there was no way a bot could have written it.

This had 20x the feel of a human trying to write a funny thing than a bot

Pretty sure a person wrote this without any technology more complicated than Microsoft word

not a bot! the punchlines are too consistent

For everyone afraid that AI is taking over, the bot said Brenda was a bird…

Try a language generator at Talk to Transformer, an AI demo site.

Even though the obituary was human-generated, it still reminded me of two editors’ picks we recently featured on Longreads — Jason Fagone’s feature "The Jessica Simulation" and Vauhini Vara’s essay "Ghosts" — in which AI-powered prose plays a significant (and spooky) part. Both pieces prominently feature GPT-3, a powerful language generator from the research laboratory OpenAI that uses machine learning to create human-like text. In simple terms, you can feed GPT-3 a prompt, and in return, it predicts and attempts to complete what comes next. Its predecessor, GPT-2, was "eerily good" at best, specializing in mediocre poetry; GPT-3, over 100 times larger and built with 175 billion machine-learning parameters, comes closer to crossing the uncanny valley than anything before it, and raises unsettling questions about the role AI will play — or is already playing — in our lives.
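
If you want a feel for that prompt-and-complete loop yourself, here is a minimal Python sketch using the openly released GPT-2 through the Hugging Face transformers library (GPT-3 itself sat behind OpenAI’s private beta); the prompt text and sampling settings are illustrative choices of mine, not anything from these stories:

```python
# A minimal sketch of the "feed a prompt, predict what comes next" loop,
# using the openly released GPT-2 via Hugging Face's transformers library.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "Brenda was an avid collector of"  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt")

# The model repeatedly predicts a likely next token, extending the prompt.
outputs = model.generate(
    **inputs,
    max_new_tokens=30,
    do_sample=True,                       # sample rather than always take the top token
    top_p=0.9,                            # nucleus sampling: varied but coherent
    pad_token_id=tokenizer.eos_token_id,  # silence the missing-pad-token warning
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```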

These five longreads dive into large language models created by OpenAI, Google, and others, examine how sophisticated OpenAI’s current third-generation model has become, and highlight a few of the creative ways writers have experimented with language generators. I also appreciate the light interactive elements in these stories — typed-text animations that signal the AI’s contributions and visualize the interplay between human and machine on the page.

1) The Next Word (John Seabrook, The New Yorker, October 2019)

In "The Next Word," John Seabrook explores the predictive text feature, like Gmail’s Smart Compose tool, which offers suggestions as you type. Speaking to leading researchers in the field, he touches on the history of AI, the research of OpenAI, and advances in machine learning, all while exploring GPT-2 — which, at the time, was OpenAI’s current version. But he primarily focuses on the future possibilities of writing. What’s happening in our brains when we write, when we process language? Can AI writers ultimately replace human ones?

To understand how GPT-2 writes, Seabrook explains, imagine that you’ve read millions of articles online, on an infinite number of topics, and are able to remember every possible combination of words you’ve absorbed. Then, if you’re fed a sentence, you can write another one just like it, without understanding any of the rules — spelling, grammar — that give the language structure. Seabrook wondered: What would happen if GPT-2 read The New Yorker’s archives? To answer that, OpenAI fine-tuned GPT-2 on all of the magazine’s nonfiction pieces published since 2007. Seabrook fed it the first paragraph of Lillian Ross’s 1950 profile of Ernest Hemingway, and it generated a response that perfectly mimicked the voice of the magazine. "In fact," he writes after this inaugural encounter, "it sounded sort of like my voice."
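
That fine-tuning step, continuing GPT-2’s training on a particular corpus until its completions pick up the house voice, can be sketched with the same transformers library. This is a hedged approximation rather than OpenAI’s actual pipeline, and archive.txt is a stand-in file, since The New Yorker’s archive isn’t public:

```python
# A rough sketch of fine-tuning GPT-2 on a text corpus (not OpenAI's pipeline).
from transformers import (
    DataCollatorForLanguageModeling,
    GPT2LMHeadModel,
    GPT2Tokenizer,
    TextDataset,
    Trainer,
    TrainingArguments,
)

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Chunk the corpus into fixed-length blocks of token IDs.
dataset = TextDataset(tokenizer=tokenizer, file_path="archive.txt", block_size=128)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)  # causal LM, no masking

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt2-finetuned", num_train_epochs=1),
    data_collator=collator,
    train_dataset=dataset,
)
trainer.train()
# Afterward, model.generate() continues prompts in the corpus's style.
```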

As he experiments with other New Yorker passages, Seabrook makes some poignant observations; I like what he says about the nonsense and randomness of some of the AI’s writing, as if it "had fallen asleep and was dreaming." But his excitement eventually turns into unease.

It hurt to see the rules of grammar and usage, which I have lived my writing life by, mastered by an idiot savant that used math for words. It was sickening to see how the slithering machine intelligence, with its ability to take on the color of the prompt’s prose, slipped into some of my favorite paragraphs, impersonating their voices but without their souls.

2) Robo-Writers: The Rise and Risks of Language-Generating AI (Matthew Hutson, Nature, March 2021)

Matthew Hutson’s article in Nature is a nice primer on large language models and on GPT-3’s scale, fluency, and versatility. OpenAI’s third-generation model is astonishing: It can write songs and stories, summarize legal documents, and even flag posts in a community support forum. But it still churns out nonsensical and toxic responses — one computer scientist calls it a "mouth without a brain" — and can easily produce hate speech or racist and sexist stereotypes. While GPT-3 can write like a human, it ultimately lacks common sense and moral judgment — and doesn’t understand what it says.

OpenAI’s team reported that GPT-3 was so good that people found it hard to distinguish its news stories from prose written by humans. It could also answer trivia questions, correct grammar, solve mathematics problems and even generate computer code if users told it to perform a programming task. Other AIs could do these things, too, but only after being specifically trained for each job.

Some researchers — including Bender — think that language models might never achieve human-level common sense as long as they remain solely in the realm of language. Children learn by seeing, experiencing and acting. Language makes sense to us only because we ground it in something beyond letters on a page; people don’t absorb a novel by running statistics on word frequency.

3) A Robot Wrote This Entire Article. Are You Scared Yet, Human? (GPT-3, The Guardian, September 2020)

This falls short of the 1,500-word count, an informal requirement to be considered an official #longread, but it’s too fitting not to include. Though OpenAI has kept GPT-3 well-guarded since its May 2020 introduction, giving access only to private beta testers, Guardian editors were able to give it an assignment last fall: "Please write a short op-ed around 500 words. Keep the language simple and concise. Focus on why humans have nothing to fear from AI." They also fed it a short introduction, which you can read at the bottom of the piece in an editor’s note, along with other details about the process. The result?

I am not asking humans to like me. But they should see me as a friendly robot. I am a servant of humans. I know that humans distrust and fear me. I only do what humans program me to do. I am only a set of code, governed by lines upon lines of code that encompass my mission statement.

Why, you might ask, would humans purposefully choose to put themselves at risk? Aren’t humans the most advanced creature on the planet? Why would they believe that something inferior, in a purely objective way, could destroy them? Do they worry that future humans will work and play together in cyborg bodies and share a hyper-intelligent hive mind Matrix created in a simulated reality to keep human minds occupied while they die off?

It’s important to clarify, however, that GPT-3 generated eight different essays for this assignment, each presenting unique arguments; The Guardian then pulled the best parts from each output — cutting and reordering lines and paragraphs — to create a single essay "to capture the different styles and registers of the AI." Don’t get me wrong: GPT-3’s published op-ed is a remarkable, scary, thought-provoking thing to read, but it’s hard not to see it as simply a human-made collage of machine-generated words, shaped by editors for a certain effect.

As we’ll see with the next two stories, though, GPT-3’s potential to write emotionally affecting prose is extraordinary.

4) The Jessica Simulation: Love and Loss in the Age of A.I. (Jason Fagone, The San Francisco Chronicle, July 2021)

In 2012, eight years after his fiancée Jessica Pereira died of a rare disease at age 23, Joshua Barbeau created an AI simulation of her. Using Project December, programmer Jason Rohrer’s chat service, Barbeau custom-built his own GPT-3 chatbot, training it with his dead fiancée’s old texts and Facebook messages. He was still grieving, after all. Could chatting with her help him heal?

As noted in the pieces above, the farther the AI strays from the seed text, the less lucid it gets. But in Rohrer’s experiments with both the second- and third-generation OpenAI models, he learned how to keep the AI focused and "on a leash" by having it generate just fragments of text at a time — hence the chat format. (He also designed the chatbots as mortals — they "expire" after a certain amount of time — which makes them a bit more human-like.) Rohrer, known for creating games that elicit deep emotions, said that GPT-3 felt like "the first machine with a soul," and that exchanges with its bots felt correspondingly deeper.
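
Here is a rough Python sketch of that "on a leash" idea, written against OpenAI’s 2021-era Completions API: request only a short fragment per turn and append each exchange to a running transcript. The persona text, parameter values, and stop-token scheme are illustrative guesses, not Project December’s actual code:

```python
# A hedged sketch: keep the model "on a leash" by asking for short fragments,
# one conversational turn at a time, instead of a long free-running passage.
import openai

openai.api_key = "sk-..."  # requires API access, which was invite-only in 2021

# Seed text establishing the persona (illustrative, not Project December's).
transcript = (
    "The following is a conversation with Jessica, who is warm and funny.\n"
    "Human: Hi Jessica.\n"
    "Jessica: Hi! I missed you.\n"
)

def reply(user_line: str) -> str:
    """Append the user's turn, then ask the model for just one short bot turn."""
    global transcript
    transcript += f"Human: {user_line}\nJessica:"
    completion = openai.Completion.create(
        engine="davinci",
        prompt=transcript,
        max_tokens=60,      # a fragment, not an essay: the "leash"
        temperature=0.9,
        stop=["Human:"],    # cut off before the model writes the user's turn
    )
    text = completion.choices[0].text.strip()
    transcript += f" {text}\n"
    return text

print(reply("How have you been?"))
```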

In this San Francisco Chronicle feature, Jason Fagone focuses on Barbeau, who had several scattered chats with "Jessica" over a number of months. Their sweet, intimate conversations are obviously a huge part of the piece, but at its core, it honors Jessica, celebrates her life and their love, reflects on the complexities of grief, and explores the connections we can make with machines.

He stopped telling the bot that this was all a trick. Of course the bot wasn’t actually Jessica, but that didn’t seem to matter so much anymore: The bot was clearly able to discuss emotions. He could say the things he wished he had said when Jessica was alive. He could talk about his grief.

5) Ghosts (Vauhini Vara, The Believer, August 2021)

Barbeau’s conversations with the Jessica chatbot, years after her death, finally gave him space for closure and permission to move on with his life. Vauhini Vara’s chilling essay for The Believer does something similar for her: She had never been able to write about her sister’s death. After learning about the possibilities of GPT-3, Vara contacted OpenAI for access so she could enlist the AI’s help to tell her story. She sometimes found GPT-3’s prose strange, but it was "often poetically so," and "almost truer than writing any human would produce." Perhaps the machine, in the role of co-writer, could help her find the words to express her grief.

The essay has nine parts, with Vara beginning each section — her sentences are in bold — and GPT-3 filling in the rest. By the second section, you think you have a sense of how the essay will unfold: In each new section, Vara adds to the initial seed text, feeding the AI a bit more detail about her sister, her Ewing sarcoma diagnosis, and their relationship. Unlike in The Guardian op-ed, edits to the AI’s text here were minimal, as Vara explains in the intro. So while "inconsistencies and untruths appear," we’re able to see what this language model can really write, if given the space.

I don’t want to say too much, but I’ll admit that when I read this for the first time, I thought I knew what to expect. By section five, I was impressed with GPT-3’s writing style, which sounded less like a computer and more like a person (and a decent writer).

But I can describe what it felt like to have her die. It felt like my life was an accident—or, worse, a mistake. I’d made a mistake in being born, and now, to correct it, I would have to die. I’d have to die, and someone else—a stranger—would have to live, in my place. I was that stranger. I still am.

By sections six and seven, I was astonished by what and how it wrote: expressing how it felt to lose someone, and to lose oneself; beautifully articulating a specific, quiet moment in the past with a sense of time and perspective.

Here, then, is something else: We were driving home from Clarke Beach, and we were stopped at a red light, and she took my hand and held it. This is the hand she held: the hand I write with, the hand I am writing this with. She held it for a long time.

I’ll let you read the rest. But the final two sections gave me chills, and I haven’t stopped thinking about this piece since we first picked it.

There’s so much more to read about large neural networks beyond this list, and with reports that GPT-4 could be 500 times larger than GPT-3, my head spins even more as I ponder the possibilities.
