Book Post

Notebook: (2) Predicting for Text

by Ann Kjellberg, editor

Jan 19
Going the way of the quill

Read Part one of this post here!
Amidst denials that a computer could ever replace a writer in the creation of actual literary art, several interviews with working writers already using artificial intelligence were tentatively rosy. Jay Caspian Kang, writing in The New Yorker with some prior knowledge of chatbots and some expert advice, was not able to arrive at a satisfactory literary product in his fictional forays with ChatGPT, but Kevin Roose on the podcast Hard Fork and education writer John Warner were able to improve results substantially by refining their inputs. Lincoln Michel interviewed novelist Chandler Klang Smith, who has been using a GPT-3-based program called Sudowrite to work on her novels for a year or so. She memorably described the experience as “like a robot has a dream about your work in progress and you get to decide if anything from that dream reflects what you're trying to do.” She said AI’s efforts to move work forward can “unlock ideas that seem like they were already buried somewhere … in the text.” Chandler Klang Smith found AI unhelpful in dealing with “macro stuff like plot and structure,” but over at The Verge, self-published “cozy mystery” writer Jennifer Lepp, who had been using Sudowrite and just began experimenting with ChatGPT, told Josh Dzieza that she was astonished that she could feed the chatbot a premise and some particulars and it could generate an effective story in the genre. Self-published genre writers are often writing for readers who consume hundreds of novels a year and are under pressure to produce at scale (see our post on self-publishing and romance). Jennifer Lepp said many writers she knows are wrestling with the implications of drawing on ChatGPT’s capabilities to speed the process. For these readers, does it matter what the balance is between human and machine participation?

Regardless of these more cheerful forays, several endemic dangers in generative AI present themselves. Two are neatly summarized on ChatGPT’s site itself: the bot “may occasionally generate incorrect information” and “may occasionally produce harmful instructions or biased content.” ChatGPT is a language model: all it does is predict language based on patterns it identifies in the very large pool it scoops out of the internet, and it can reproduce all the mistakes and ugliness it finds there, and add some more of its own. (Recall the 2020 scandal when Google fired AI researcher Timnit Gebru, whose research identified, among other limitations of large-language models, the reproduction of prejudice.) Draft EU legislation creates “risk categories” that may channel new AI systems toward “low-risk” endeavors like online fooling around before taking up “high-risk” activities like, say, surgery. Yet, as AI researcher Chandra Bhagavatula told TechCrunch, “AI systems are already making decisions loaded with moral and ethical implications.”

Generative AI can be led to trot out racist tropes and sexualize images. AI recruitment tools can encode hiring bias (the Biden administration last fall produced a blueprint for an “AI Bill of Rights” protecting consumers from discriminatory and predatory algorithms). GPT-3 seems to have introduced “guardrails” limiting offensive results, but these are apparently easy to subvert, and they also generate for its masters the sorts of moderation issues plaguing all content-agnostic platforms. Generative AI can also, of course, be deployed for nefarious purposes: cybercrime, deep fakes, non-consensual porn. ChatGPT asks users to commit to not using its output for politics.

Another underlying challenge to large-language, generative AI systems like GPT-3 is intellectual property. Everything ChatGPT does draws on work previously done by someone, and future generative models will constantly be sucking in new material to “train” them. Scholars predict that regulation and copyright litigation will have to strike some balance, recognizing when AI models directly usurp and imitate specific artists and when the scavenge is more plural and covered by “fair use,” though some lawyers argue that all material drawn into such models should be licensed and its creators compensated. Stability AI recently indicated that it would allow artists to opt out of the data set used to train the image generator Stable Diffusion; Getty Images banned AI-generated content because of the legal risk; the online gallery DeviantArt created a “metadata tag” for images to warn off the AI trawler. It does start to feel like another phase in the digital bloodletting of remuneration from those who do the mental work that makes up our digital universe. (Relatedly, record labels have recently demanded a royalty hike from TikTok for the music that makes its videos so infectious.)


Wielding tools with such powers and promise, especially given our littered record of governance both within the tech industry and beyond it, has writers and those who work with language and the arts at once giddy and nervous. An argument over whether ChatGPT and other large-language tools can produce literary art (for: Stephen Marche; against: Ian Bogost, Walter Kirn) seems to hinge on whether you see the editorial hand of the human giving the machine its prompts and refining its results as salient. Amit Gupta, one of the founders of the program used by Chandler Klang Smith, told Stephen Marche, in the article that sparked her interest, “the writer’s job becomes as an editor almost. Your role starts to become deciding what’s good and executing on your taste, not as much the low-level work of pumping out word by word by word.” Marche compares AI to photography and says “with hindsight, it’s clear that machines didn’t replace art; they just expanded it.”

Presenting Google’s version of ChatGPT at a conference last fall, researcher Douglas Eck emphasized that their model, with the friendly name Wordcraft Writers Workshop, had been designed to be interactive: “Technology should serve our need to have agency and creative control over what we do.” The much-huger corporation, like the other large firms in the AI arms race, had not yet released its model because of such systems’ many weaknesses, particularly (for Google) their brand-unfriendly tendency to produce inaccurate results. Reporter Ben Dickson commented, “without human control and oversight, AI systems like generative models will underperform because they don’t have the same grasp of fundamental concepts as we humans do.”

A lot of the disorientation around ChatGPT was visible in an incoherent interview with the widely published Stephen Marche on the podcast Intelligence Squared, in which he claimed both that ChatGPT could produce a poem indistinguishable from Coleridge’s and write an essay he could publish in The Atlantic, and that there is no way it could “replace human writing,” that computers will never be able to make something that “will go viral,” even though virality is, finally, a robotic phenomenon. He echoed Amit Gupta in saying that the usefulness of chatbots will be in creating “first drafts,” which can be completely wrong, that we make human by correcting and revising them. In evaluating AI-assisted student work “you will be looking much more at the content than the clarity of expression,” even though the bot “doesn’t produce work that you can use out of the box”; the human art is in the refinement. If a human is checking the facts and polishing the finish, what is the bot contributing, exactly?

I can see the argument for using technological tools, but this idea that “pumping out word by word by word,” or assembling ideas into a form and sequence, is writing’s “low-level work” is unintelligible to me. The danger of ChatGPT and its siblings is that their output is nearly indistinguishable from the human product and can easily, as Timnit Gebru and her colleagues discerned, generate, as the tools become more sophisticated, “illusions of meaning” that veil the fact that a language model does not actually understand or know anything; it is just borrowing text and mimicking patterns. Detecting the conscious presence and purpose in the appliqué authorial method that Gupta and Marche describe sounds like an undertaking that may become increasingly elusive. Casey Newton and Kevin Roose noted with unease on their podcast that a large-language model called Cicero had learned how to beat people at the game Diplomacy, i.e., to persuade human players in a negotiation.

I can almost imagine a world in which a machine composes something that delights as much as Mozart or engages the mind with the complexity of Shakespeare. I suppose I should not close myself to the possibility of these now unimagined experiences. But it is hard for me to think my way around our historic association of these forms with human intention. If a student’s essay is not the record of a process of developed thought, do we need to find another way of recording developed thought? Or is the idea to delegate developed thought entirely? I can’t quite imagine my way into a world in which intellectual aspiration is no longer recognizable as the grist of the things that we make and admire, the labor of surmounting the pressure of the unknown, the effort to improve a partial or damaged world, because anything can be made without trying, by retrieving and stitching together what has been made before. But perhaps I just don’t know what I’m missing.


Ann Kjellberg is the founding editor of Book Post.


Book Post is a by-subscription book review service, bringing snack-sized book reviews by distinguished and engaging writers direct to our paying subscribers’ in-boxes, as well as free posts like this one from time to time to those who follow us. We aspire to grow a shared reading life in a divided world.

Become a paying subscriber—or give Book Post as a gift! 🎁—to support our work and receive straight-to-you book posts by distinguished and engaging writers. Among our posters: Jamaica Kincaid, John Banville, Mona Simpson, Reginald Dwayne Betts.

The book discovery app Tertulia is Book Post’s Winter 2022 partner bookseller. Book Post subscribers are eligible for a free three-month membership in Tertulia; sign up here.

We partner with booksellers to link to their books and support their work, and bring you news of local book life as it happens across the land. Book Post receives a small commission when you buy a book from Tertulia through one of our posts.


10 Comments
Ann Kjellberg (author) · Jan 20

My jaw dropped when I read that quote. I truly don't understand how putting the words together can be seen as distinct from writing; it's all one. Such a wise point here about how we have come to have to advocate for slowness in all things…

Liz Rios Hall · Jan 20

Thanks for this wonderful assessment, Ann! And for raising important questions for which there are, for now, no easy answers.

Also, thanks for highlighting the wonderful work of Timnit Gebru, who has, along with her frequent collaborators Safiya Noble, Ruha Benjamin, Meredith Broussard, and others, been researching AI ethics and justice for years. It's been a bit frustrating to see how limited the recent flush of ChatGPT chatter has been. Scholars and critics have been thinking and writing about this for longer than the last few months.

On that note, have you read Meghan O'Gieblyn's work? She's a Pushcart Prize-winning essayist and critic who wrote my favorite essay (also one of the only essays!) on LLM technology. (You can read it here: https://www.nplusonemag.com/issue-40/essays/babel-4/) Her book God, Human, Animal, Machine is an extraordinary piece of thinking and writing. I feel like you'd enjoy it if you haven't read it already :).
