Roland Barthes’ essay “The Death of the Author” is inscrutable to this reader. The guy seems determined to use obscure French names in place of regular verbs and nouns as much as possible. That is a shame, because – as so often – the Scholarly Writing obscures an interesting idea.
The question is whether it matters what writers think about their own stories. Many would say yes, of course it does: the author knows what really happened after that cliffhanger, but just didn’t write it down. So internet fandom will fiercely debate the right interpretation of books and TV, and clamour for “word of God” explanations to set things straight (though they may get the “shrug of God” if creators keep schtum).
Barthes and others have argued that, in fact, the creator’s intention matters not one bit. It is readers, not writers, who applaud novels, award them prizes, and enshrine them as classics. It’s what’s in readers’ heads that matters, so their collective interpretations are at least as important as the writer’s own. For purposes of lit crit, if writers cannot give reliable or authoritative answers, they may as well snuff it as soon as the work is published. At best they are demoted to just another member of the audience.
To see the argument more clearly, consider an author with no goal at all. It’s said that if enough monkeys hammer away at enough typewriters, generating text at random, they will eventually produce Shakespeare.1 If that takes too long, we can instead spin stories with GPT-3, a recent language model that can emit coherent, original text in an impressive range of styles. Most of its tales will be bland and meandering; some will be terrifying Vonnegut-esque sci-fi. Eventually one or two might be good enough to sell a couple of ebooks. Imagine that one short story about animals can be read as a piercing political allegory. It gets published, is wildly popular, wins a Booker and so on.
We can’t ask GPT (or the monkeys) what they really meant. Yet removing intent seems to change little. The stories just need to push the right buttons for readers.2 Like a genome, a book’s string of characters is rich in information – but meaning comes from the effect the sequence has on the world, not the process that arranged the letters.
Unlike our thought experiment, real works of fiction are not products of pure chance. But nor are they unique exposures of the soul. Creatives of all kinds plug in the fads and fashions of their time, rehashing what came before them. That alone is a surprisingly powerful force. Evolutionary biology is founded on the idea that such remixing and distribution, combined with continual pruning (by a harsh environment or a discerning readership), can result in designs of staggering complexity and nuance. It’s not exactly that Darwinism rules out intentional design. It just doesn’t need to include any, because selection and variation can do so much on their own.3
Unlike the forces of natural selection, authors do have goals. But it’s not a given that these intentions matter. The author’s viewpoint may have little to do with the readers’ – most obviously when the resulting work flops. And a book can become a classic for reasons its author is oblivious to.
Take Ray Bradbury’s “Fahrenheit 451”. Countless reviewers, literary analysts and book-group pundits have lauded the novel as a cautionary tale about overbearing, censorious governments. But Bradbury himself bristled at this. His book was a conservative warning about how TV ruins minds. Bradbury stormed out of seminars when students refused to be corrected, and refused to attend his Pulitzer ceremony (which by tradition does not involve a rebuttal from the awardee).
Only Bradbury knew what inspired “451”. But he may not be the top expert on why it resonates with readers. In fiction and elsewhere, one-hit wonders and disappointing sequels suggest that creators may not fully grasp what made their most successful works succeed. (Indeed your essayist, though not burdened with great success, finds this blog’s metrics quite mystifying.)
TV and film complicate authority further. Large teams and anonymous contributions are not unique to those media; Margaret Atwood hires researchers for her historical and sci-fi novels, and ghostwriting is commonplace in biographies. But shows are also more susceptible to censorship. “The Bad Kids”, a Chinese thriller series, is effectively part-authored by the government, who prefer to see “positive energy” on screen. Some western fans think the writers left clues to a darker interpretation of events. Which author could determine the truth?
The internet too has changed authorship, making it at once more distributed and more centralised. Memes are at the far end of distribution: free of copyright, they are liberally altered, remixed and piped around the internet by millions of creator-consumers. They are so Darwinian as to be named after Richard Dawkins’ catchy analogy between genes and culture. At this extreme, authors and their goals are irrelevant. The works usually go uncredited. And notwithstanding brave attempts from advertisers, it is all but impossible to go viral on cue.
In contrast, authors of traditional media seem to matter ever more online. Social networks provide immediate and incessant access to creators, giving them more sway over fans. JK Rowling and Neil Gaiman, two writers in good health, are regulars on Twitter and Tumblr, answering questions about their stories and characters. This often involves revealing fashionable queerness that wasn’t (or was only ambiguously) represented in the original.4 But Gaiman curbs his own authority. “Anything that happens offscreen is valid headcanon… I might be wrong, after all.”
Only a few go as far as to change already-published content. George Lucas is infamous for tweaking Star Wars and its characters. But perhaps that’s his right. He tells miffed fans, “I’m sorry you saw half a completed film and fell in love with it,” but “I’m the one who has to have everybody throw rocks at me all the time… I’m making the movies, so I should have it my way.”
Instead of arranging apes, you could browse the Library of Babel, which contains all the greatest stories that ever have been or will be written – alongside many more nonsensical texts, made with every possible combination of characters in a 410-page format. ↩
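For a sense of how many nonsensical texts that is, here is a quick back-of-the-envelope calculation, assuming the commonly cited parameters for Borges’s library (a 25-symbol alphabet, and books of 410 pages with 40 lines of 80 characters each):

```python
import math

# Commonly cited parameters for Borges's Library of Babel:
# 25 symbols (22 letters, comma, period, space), 410-page books,
# 40 lines per page, 80 characters per line.
ALPHABET = 25
CHARS_PER_BOOK = 410 * 40 * 80  # 1,312,000 characters

# The library holds one book per possible character sequence,
# i.e. 25 ** 1,312,000 distinct books. That integer is far too big
# to print in full, but we can count its decimal digits with logs.
digits = math.floor(CHARS_PER_BOOK * math.log10(ALPHABET)) + 1
print(f"distinct books: a number with {digits:,} digits")
# → distinct books: a number with 1,834,098 digits
```

A number with nearly two million digits, for a universe with roughly 10^80 atoms – which is why finding the one good animal allegory in there is the real work.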
The generator need not even get credit for the story. The people hunting through mountains of rubbish to get to the good stuff are doing most of the work. Editors might appreciate this sentiment more than writers. Then again, perhaps writers are just selective editors of their own stream of nonsense. ↩
The paradigm of adaptationism, though out of favour in biology, turns up in any number of other fields under different guises. In social sciences it’s “functionalism” and in lit crit it’s “the hyper-protected cooperative principle”. Under adaptationism, traits of an organism are assumed to have evolutionary purpose, affecting the fitness of the organism, rather than just being arbitrary by-products. In literature, features of writing are assumed to have communicative purpose, affecting the meaning and worth of the work, rather than just being meaningless oddities. In both cases the assumption is based on survival against selective pressures: “The publication process, and those processes that accompany it, ensure that reading a particular literary text is ‘worth it’. By advancing through these processes, the literature establishes that it exhibits a certain level of value. … Owing to the conditions under which literary works are composed, published, and distributed, all the types of nonfulfillment … do not arise or tend to be eliminated in the process of a text’s becoming a work of literature”.
This connection should not be confused with “literary Darwinism”, a paradigm which analyses authorial intent (and reader enjoyment) through the lens of evolutionary psychology. ↩
Terry Pratchett didn’t miss out on the trend. ↩