After I wrote a notebook for y’all in a burst of industry after the arrival of ChatGPT in the general consciousness last fall, I’ve had trouble looking directly at it. I think I have a terminally analogue mind, because the thought that automated thinking could pose any real threat to actual thinking (likewise art) seems preposterous to me. (I got an unexpected boost of support here from none other than Noam Chomsky.) Disorientingly, people who know a lot about it seem determined to dismiss one apocalyptic claim about where “generative” artificial intelligence (of which “large language models” like ChatGPT are a type) might be taking us, while embracing another. My favorite tech writer, Jaron Lanier, has said that he did not sign a big letter calling for a pause in AI development because it contributed to the mystification of the technology. He wrote in The New Yorker that AI, the whole internet actually, should “show its work” and be traceable to its human origins: “Big-model A.I. is made of people—and the way to open the black box is to reveal them.” (It’s not clear to me why he’s not implementing this humanistic approach in his own shop at Microsoft. Too low on the totem pole?) Two Princeton tech researchers told Semafor recently that they considered the threat of mass disinformation and “x-risk,” a.k.a. “existential risk,” or apocalypse via our AI overlords (Silicon Valley is always ready with a witty abbreviation), to be exaggerated, and yet the threat of deep fakes to be very serious. Many of the engineers who signed the AI “pause” letter (including Elon Musk) are barreling ahead with developing it. In May OpenAI chief executive Sam Altman, whose company developed ChatGPT, bucked the recent tradition of hostile encounters between tech entrepreneurs and government by appearing before Congress to call for his own regulation.
One scholar said of Altman’s testimony, “It’s such an irony seeing a posture about the concern of harms by people who are rapidly releasing into commercial use the system responsible for those very harms.” In this area, terror, enthusiasm, caution, guarded optimism, protectiveness, and a readiness to cash in seem to inhabit many minds simultaneously, to varying degrees.
Even as the subject seems awash in contradiction, though, the practical consequences begin to press upon us. In the field of writing and literature, Hollywood writers scored a major victory this week by negotiating an agreement with the Alliance of Motion Picture and Television Producers, after a 146-day strike, that included constraints on the use of AI to displace them in screenwriting. At the start of the strike the studios were adamant against such a provision—not only because they expected to use AI as a practical screenwriting tool but also because they expected to use it as leverage to limit writers’ fees. The New York Times reported that the AI dispute was “the last sticking point in the negotiation.” Observers had not been optimistic. Describing the AI demand back in April, the Times noted that “unions, historians say, have generally failed to rein in new technologies that enable automation or the replacement of skilled labor with less-skilled labor.” It quoted historian Jason Resnikoff, who studies labor and automation, saying he was “at a loss to think of a union that managed to be plucky and make a go of it.”
The writers’ victory seems propitious for the still-striking actors, who also want to constrain studios’ use of AI to preserve and reproduce the instruments of their art—their images and voices. Narrators of audiobooks are resisting the temptation to license their voices into bot-readers for the price of about four years’ work. James Earl Jones, at ninety-one, threw in the towel on personally inhabiting Darth Vader, but passed on the rights to resurrect him electronically. Google is said to be negotiating with Universal Music to license songs for users hankering for the total karaoke experience, producing their own tune that sounds like Gershwin with Mick Jagger’s voice, and so on. (Addendum: This week saw major advances in the use of voiced AI in our daily interactions with machines.)
Because their work still has the ability to move large sums, creative people in the performing arts have the opportunity to set a marker where others in the writing trades feel their advantage slipping away. “Your boss wants AI to replace you. The writers’ strike shows you how to fight back,” proclaimed the Los Angeles Times, but already writers are seeing professions evanesce. Tech journalist Casey Newton wrote recently that Google’s nascent effort to replace search results—“the lifeblood of many publications”—with AI-generated answers to questions “has already begun to nibble away at the kind of quick-turnaround ‘what time is the Super Bowl?’ style posts and affiliate links that currently fund thousands of reporting jobs around the world.” At the beginning of May I was drinking with aggrieved translator and translation-editor friends in Germany who were already seeing their work dry up. Some AI companies are actually hiring poets to give their results the polish that made OpenAI’s chatbot ChatGPT such a success (human intervention, mostly provided by the sort of low-wage workers who shield us from snuff videos on Facebook, gave ChatGPT its special shine, though some researchers fear these interventions may be contributing to political bias in results). Julian Posada, an information-science researcher at Yale interviewed for the hired-poets story, “questions whether creatives will accept this work as a sustainable source of employment.” One recalls the audiobook narrators’ reluctance to phase themselves out by following the path of James Earl Jones.
Up at the top of the literary food chain, though, there has been more organized resistance. Media mogul Barry Diller has been assembling a consortium of publishers to sue large language model proprietors for copyright infringement for their use of published work to “train” models at their central job of language prediction. “Search was designed to find the best of the internet,” Diller has said. “These large language models, or generative AI, are designed to steal the best of the internet.” The New York Times backed away from Diller’s consortium, but its own effort to forge a separate peace with OpenAI has collapsed. As reported on NPR, “a top concern for the Times is that ChatGPT is, in a sense, becoming a direct competitor with the paper by creating text that answers questions based on the original reporting and writing of the paper's staff.” The Associated Press has signed its own licensing agreement with OpenAI.
As Alex Reisner has reported in The Atlantic, “shadow libraries” of illegally digitized books have also appeared in the “training sets” for large language models (the piece offers authors a tool for seeing whether their own book was swept in). In July the comedian Sarah Silverman joined authors Christopher Golden and Richard Kadrey in class-action lawsuits in California against OpenAI and Meta, arguing that their works had been “copied and ingested” “without consent, without credit, and without compensation.” In September a group of more than a dozen fiction writers, including John Grisham, Jonathan Franzen, Elin Hilderbrand, George R.R. Martin, and George Saunders, filed a similar suit in New York, in conjunction with the Authors Guild, and Michael Chabon, Ayelet Waldman, and Matthew Klam filed another federal suit against OpenAI and Meta in San Francisco.
Observers hope that discovery in these cases might lift the veil on the secretive internal processes of large language models. Attorneys argue that, although these authors’ books are small drops in the huge seas of language the models draw on, the bots can nevertheless produce information that can only be derived from their books. (Jane Friedman notes that the Authors Guild case rests on the special claims of fiction writers, as their books are based on created worlds, entirely identified with them, and not on publicly available facts.) The reading subscription service Scribd has prohibited subscribers from using its database to train large language models, and textbook publisher Pearson sent out a cease-and-desist order in May after its share price fell when a rival said its business had been hurt by ChatGPT. A Washington Post analysis of a data set used to train a Google large language model found that the copyright symbol appeared in it more than 200 million times and that it had ingested half a million personal blogs, in addition to many works from major publishers. Reddit has announced it will charge AI developers for access to its archive.
Meta and OpenAI have filed to dismiss most of the claims in the Silverman case on grounds of fair use; a 2016 ruling allowing Google to provide “snippets” of scanned books under copyright seems to support that position. But the vast riches and central market positions that the major tech firms have achieved in the intervening years by repurposing others’ intellectual property to their own enrichment have left creative people and the businesses that support them in a fighting mood. Ben Smith wrote in Semafor that requiring companies like Google to pay for the use of writers’ work would be “a dramatic change,” as they were able to build a “high-margin business in large part because they—unlike media companies from Netflix to Comcast—don’t pay for content.”
Read Part Two of this post here
Ann Kjellberg is the founding editor of Book Post. Find more of her Notebook posts here.
Book Post is a by-subscription book review service, bringing snack-sized book reviews by distinguished and engaging writers direct to our paying subscribers’ in-boxes, as well as free posts like this one from time to time to those who follow us. We aspire to grow a shared reading life in a divided world. Become a paying subscriber to support our work and receive our straight-to-you book posts. Adrian Nicole LeBlanc, Jamaica Kincaid, Marina Warner, Colin Thubron, Reginald Dwayne Betts, John Banville, Lawrence Jackson, Nicholson Baker, Ingrid Rowland, John Guare, Álvaro Enrigue, Emily Bernard, more.
Follow us: Instagram, Facebook, TikTok, Notes, Bluesky, Threads @bookpostusa