Advances in AI suggest it may only be a matter of time before a machine-generated novel is of sufficient quality to dazzle the Booker Prize judges, or even render human novelists obsolete. But here’s why that (probably) won’t be the case

Written by Ian Leslie


It’s November, 2040, and the winner of the Booker Prize is about to be announced. A hush falls upon the assembled guests at London’s Roundhouse, as the venerable novelist Sally Rooney, now a two-time winner of the prize, opens the golden envelope. Her eyes widen as she reads what is written on the slip of paper. She looks up at the audience, and declares, unsmilingly, ‘It’s game over. The machine has won.’

You might say this is unrealistic, and I might agree with you – it should probably be 2030. But whenever it happens, there will come a time when human novelists are competing with AI novelists. AIs will learn to write novels that are as compelling as those by humans but produced within a few seconds rather than a few years. At which point, the long game which began with – well, take your pick of Cervantes or Defoe or Shikibu – will draw to a close. Novelists will go the way of typists and telegraphists, town criers and troubadours: masters of an obsolete skill.

Do I really believe this? I’m not sure I do, but let’s press the case for it. Novels are made of words, manipulated by an author. A form of modern AI known as the Large Language Model (LLM) is now rather good at manipulating words (for the reader’s peace of mind, this is not one of those articles that will end with the revelation that it was artificially generated). LLMs are fed human texts: websites, Wikipedia entries, academic articles, whole books; the LLM behind ChatGPT has swallowed the entire internet. LLMs then look for patterns in all these words: which verbs go with which nouns, which names get attached to which topics, which sentences tend to go with which sentences, which paragraphs with which paragraphs. From those patterns, the model learns how to predict the most plausible next word and next paragraph. When you ask ChatGPT a question, it matches the words of your question with the words it has learned to associate with them.
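For the technically curious, here is a deliberately tiny sketch of that idea of next-word prediction: a toy example in Python, using a made-up scrap of Maigret-flavoured text rather than any real training data. A genuine LLM uses a vast neural network rather than simple word counts, so this is only an illustration of the underlying wager – look at what has come before, and guess the most plausible next word.

```python
from collections import Counter, defaultdict

# A made-up toy corpus standing in for the billions of words an LLM is trained on.
corpus = (
    "the inspector lit his pipe and the inspector watched the rain "
    "and the rain fell on the quay and the inspector said nothing"
).split()

# Count, for every word, which words follow it and how often.
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    """Return the most plausible next word, judged purely by frequency in the toy corpus."""
    followers = next_word_counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

# Generate a short continuation by repeatedly predicting the next word.
word, generated = "the", ["the"]
for _ in range(5):
    word = predict_next(word)
    if word is None:
        break
    generated.append(word)

print(" ".join(generated))  # prints: the inspector lit his pipe and
```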

This sounds pretty basic, but if you feed the machine enough data the result is impressive. The latest version of ChatGPT, built on the GPT-4 model, can instantly generate a Shakespearean (in the, er, broadest sense of the term) sonnet on climate change. In fact, it has a passable grasp of most well-known writers. I just asked it to write a scene in which Georges Simenon’s Inspector Maigret takes a trip to the gym. It did a creditable job (‘As the session ended, Maigret left “Le Corps Harmonieux” with a slight smile on his lips…his investigation of human nature had found a new playground.’).

I suspect that GPT-4 achieved this feat based on its ingestion of online texts about Simenon, rather than Simenon’s novels. What if it were fed Simenon’s entire oeuvre? (Which, by the way, would probably take up more terabytes than the entire internet; in his long life, Simenon seems to have only ever stopped writing to have sex, and perhaps not even for that.) Then ChatGPT, or one of its successors – remember, this technology is improving at a fast rate – might be able to generate a Simenon pastiche so sophisticated as to be indistinguishable from an original, provided there are no gym sessions in it.


Ah but, you might say, Simenon was a genre writer, albeit a brilliant one, and by definition, genres follow certain patterns, even as they vary them. A detective story will usually involve a mysterious crime, some bungling cops, and a solution discovered thanks to the superior perceptions of our hero. Simenon created a sub-genre unto itself, peopled by introverted, odd men in the grip of murderous passions for femmes fatales. Since the whole job of an LLM is to detect patterns and replicate them, genre novels are the low-hanging fruit when it comes to fiction, aren’t they? The Booker Prize is for literary fiction – for novels that by their very nature do not follow patterns, and which defy our predictions about which words go with which words.

This is a crude distinction and I’m going to sidestep – in fact I’m going to give the mother of all wide berths to – the question of whether genre fiction is in some way inferior to literary fiction (I should add that the Booker Prize itself does not employ the term ‘literary fiction’ at all). I will just note that ‘literary’ authors follow patterns too, because people do. Consider this: if we fed an LLM all the novels of Kazuo Ishiguro, in all their glorious variety, would it not pick up on a few commonalities? Would it not learn to replicate that cool, plain style; to create a mysteriously hindered narrator who has difficulties with communication; to introduce themes of memory, self-deception, alienation?

Of course, nobody is more alert to such questions than Ishiguro, who has deliberately resisted conforming to predictable patterns, including his own, throughout his career. But every distinguished author has a fictional signature. We value literary novelists for their ability to surprise us and also for their consistency – for their voice. A consistent voice is, by definition, replicable to some degree. As a character in Ishiguro’s latest, Booker-longlisted novel, Klara and the Sun – itself partly about artificial intelligence, and a book which, I should stress, Ishiguro claims to have written himself – puts it, ‘Any work we do brands us…and sometimes brands us unjustly.’

The AI which wins the 2040 Booker Prize will obviously not pose as Kazuo Ishiguro or any other particular author, not least for legal reasons, but it could write a novel that feels original and which isn’t identifiable as a pastiche of any single author in its training data. Let’s say I asked ChatGPT-57 (or whatever) to generate a novel about a female terrorist set in Chicago, 1968, which blends elements of Hilary Mantel, Cormac McCarthy, and Fyodor Dostoevsky. That might sound like a weird and unmanageable request, but plenty of novels have a more diverse mix of components and progenitors. What we call originality comes from the smashing together of existing stories, ideas, forms, and styles in weird new combinations.

Once it gets the knack of generating novels that feel, well, novel, we can imagine an AI-generated effort being submitted to the Booker judging panel under a pseudonym, and winning – perhaps with its non-human identity only being revealed at the last minute. At this point, novelists would become obsolete. As readers, we would simply give the AI on our screen a few prompts and generate new novels at will, without ever having to actually buy one from a human. No wonder Sally Rooney is so stony-faced at the podium.


I don’t, however, think this is likely, for two fundamental reasons. The first is that LLMs are backward-looking. They are derivative, even when creating new combinations; a giant rear-view mirror. They produce imitations of text generated by humans, and those imitations are, in the jargon of information technology, ‘lossy’. The science fiction writer Ted Chiang describes ChatGPT as a ‘blurry JPEG’ of all the text on the internet. If you train a music model on Mozart, you get Salieri, at best (our idea of Salieri is itself a very lossy version of the real composer, but that’s another story). My AI-generated Ishiguro novel will only be Ishiguro-ish.

Now, what happens when the next generation of models is trained on those lossy versions, and so on and so on? Eventually, everything degrades to garbage. We reach the stage of what some computer scientists have termed ‘model collapse’. To stop or at least slow this process we will need novelists, and other artists, to keep refreshing the supply of fresh ideas, and to keep finding new ways to manipulate language in order to reflect new experiences.

I can imagine novelists leaning on AI to help them write. Many writers hate generating first drafts. Indeed, what’s known as ‘writer’s block’ is almost synonymous with ‘fear of the first draft’. So perhaps they will be able to prompt an LLM to work something up which they can then improve and make their own. No doubt some novelists will find this a tremendously useful function, but I would be wary of outsourcing this onerous task. The pain of writing a first draft is really the pain that accompanies hard thinking. The bleak truth is that there are probably no shortcuts to a decent second draft.

The second reason I don’t believe AIs will substitute for really good novelists is that there will always be readers who seek questions as well as answers, and as Pablo Picasso remarked (yes, this is a rare example of an authentic Picasso quote), ‘Computers are useless. They can only give you answers.’ There is a useful distinction between puzzles and mysteries. A crossword puzzle is solved with information – when you get the final piece of information, the puzzle is over. A detective like Maigret or Poirot solves a puzzle by finding the right information. At the end of the book, the reader’s curiosity is satisfied. A mystery, by contrast, is only deepened by more information. The nature of existence is a mystery; so is the nature of desire or love. The more we find out about these things, the more they stump us, and the more we are fascinated by them.

LLMs only provide answers. They write what is already known, since they are parasitical upon the existing corpus of knowledge. What’s more, in their case, all they ‘know’ is words. They don’t know the sensation of walking barefoot on grass, or opening a suitcase, or feeling hot with desire. Neither do we, really – we don’t comprehensively know these experiences in all their fullness even after we’ve had them. But being alive in the world enables us to get a strong sniff of them – which means we can capture the world in our net of words with more fidelity than LLMs can.

LLMs are puzzle-solvers, but good novels are mysteries. As readers, we love to talk about what the green light meant to Gatsby, but, or rather because, we’ll never know. Rather than replicating and shuffling our settled notions of reality, good novels subvert and disrupt them. By expanding our perception, novelists draw attention to its distortions, flaws and limits. Novelists are engaged in a quixotic struggle to crystallise what humans don’t know and cannot grasp. As Italo Calvino put it, ‘We always write about something we don’t know: we write to make it possible for the unwritten world to express itself through us.’ Writing focuses our minds on what lies on the other side of words, he says, from where something is trying to emerge, ‘like tapping on a prison wall’. I suspect Calvino would have found AI interesting, but only mildly so. It doesn’t even know it’s in a prison.

Ian Leslie writes about culture, politics and psychology in his Substack newsletter, The Ruffian.
 
