Read Part One of this post here
As Ben Smith has written in Semafor, requiring companies like Google to pay for the use of writers’ work would be “a dramatic change,” since they were able to build a “high-margin business in large part because they—unlike media companies from Netflix to Comcast—don’t pay for content.” Building remuneration for writers and publishers into the growth of AI is a bit of a shadow dance, Smith continues, as the AI companies “haven’t even figured out a business model for AI yet, and there are no profits to share in the crushingly expensive business of maintaining language models.” The streaming-centered TV and movie business (and music, for that matter) is also in search of a stable profit model, but at least millions are still coming in as the ship lurches forward, giving writers leverage there. Casey Newton has pointed out that Google’s AI tools are already more highly developed in areas, like travel advice, where Google has something to sell.
Publishers and writers are already suffering from the diminution of referrals from social media, as Facebook has moved away from prioritizing news, Twitter has fragmented, and users have migrated to platforms like Instagram and TikTok that don’t link out to original work; this next step threatens to cut original writing off entirely from the digital interface, except as a behind-the-scenes source of nutrition, like Soylent Green. How to “cultivate a direct relationship with readers” without the vampiric intervention of platforms is the holy grail now for both news and book publishing (though books at least have bookstores, God bless ’em).
Even as AI gobbles up material to reproduce as its own product, and interposes itself between users and its intellectual sources, it emerges as its own fresh, if one can use that word, gusher of content. Jane Friedman, a writer we at Book Post often look to for publishing expertise, found herself impersonated on Amazon by a flood of invented “Jane Friedmans” marketing advice (ironically) to writers. Convincing Amazon to remove, much less police, these books written by bots pretending to be her turned out to be no trivial matter, even for a pro. Amazon is already “floundering in a mess of AI garbage,” Jane sighs. The New York Times published an investigation in August into shoddy and misleading travel books on Amazon that appeared to be AI-generated, and the New York Mycological Society issued a warning against fake mushroom guides whose unreliability could prove lethal. Jane Friedman has identified a number of companies and professionals openly marketing guidance on making a buck by selling machine-generated texts. The science-fiction/fantasy magazine Clarkesworld had to close submissions after being overwhelmed by AI-“written” stories.
In September Amazon updated its policies to require disclosure when books uploaded to the site are not authored by a person, and a week later limited the number of books one can upload to three a day, though Amazon does not share these disclosures with consumers and has not announced any enforcement provisions (Authors Guild statement). As so often in tech, safety and authentication seem to get far less development attention than the commercial exploitation of new technology. (Amazon, meanwhile, was just this week charged in a long-aborning Federal Trade Commission lawsuit with monopolistic practices long known to its first guinea pig, the book industry, including indulgence of piracy and impersonation. Google, for its part, is embroiled in a landmark trial over Department of Justice charges that it stifled competition to secure its dominance over search, dominance that puts writers and publishers at the mercy of its chatbot, and faces possibly even more damaging charges from a DOJ suit filed in January alleging that it rigs the advertising market that shapes what that search shows us. These trials, along with conflicting rulings over the legislation that exempts social media platforms from liability for what they publish, could fundamentally reset the power that the tech giants hold in society as they race to develop these tools.)
In March the US Copyright Office clarified that AI-generated texts are not eligible for copyright, though AI-“assisted” texts are. Earlier this month, the UK outlined “seven principles” for regulating AI, mostly focused on consumer protection, and the EU, which has been developing AI rules for many years, continues to negotiate long-awaited proposed legislation that is expected to set an international standard. In June thirteen European consumer protection groups wrote to EU and US authorities urging them to investigate and regulate potential harms to consumers while they wait for the rules to take effect. Věra Jourová, the European Commission’s Vice President for Values and Transparency, gave a speech in September encouraging companies working with AI to adopt the bloc’s voluntary Code of Practice on Disinformation ahead of implementation, in order to protect EU parliamentary elections next year. In the US, in addition to the Congressional inquiries noted above, the White House is expected to release an executive order on AI later this year and announced two weeks ago that eight more companies had signed a voluntary pledge to follow “standards for safety, security and trust” unveiled in July. The Federal Trade Commission has sent ChatGPT maker OpenAI a demand for records about how it addresses consumer protection risks. The Journalism Competition and Preservation Act, which would enable news outlets to negotiate with tech platforms, advanced from the Senate Judiciary Committee in June, and the Authors Guild has issued recommended language for authors’ contracts to prohibit unlicensed use of books in AI training sets. The cooperation of big tech firms with regulatory efforts is welcome, but smaller companies warn that the big firms’ presence at the drafting table threatens to skew regulation to their benefit and limit competition.
Last week Google announced a new feature in Bard, its answer to ChatGPT, that allows users to check the chatbot’s results against facts retrievable on the internet, a hedge against one of the most serious liabilities of large language models: their readiness to be “confidently wrong,” because their results are based not on verifiable facts about the world but only on language prediction. When you click an icon alongside a Bard result, Bard will search the web to substantiate it and provide “supporting or contradicting information.” This information is distinct from the mechanisms Bard used to arrive at its result, and, as Casey Newton has written, “the task of steering chatbots toward the right answer still weighs heavily on the person typing the prompt.” He continues, echoing Lanier, that “tools that push AIs to cite their work are greatly needed.”
While I’d want to leave space for exploration and advancement, however remote some of the benefits seem from the distinctly analog world I occupy, for me, as I’ve indicated, there is no comparison between language made out of mimicking other language and language that emerges from the constantly renewing flesh-and-blood effort to communicate human experience—to bridge solitude and the unknown, to recognize emotion, to understand our senses and our physical presence, to care for others, to wrestle with suffering and mortality, to renew and refresh the legacy of thought, to understand ourselves and the constantly transforming experiential mysteries we face. Language is of the body and exhales from the experience of being a body—in history and philosophy and science and politics and journalism as much as in art and literature. Our confusion in trying to distinguish between these only superficially similar phenomena is perhaps an indicator of some of our woes. But I do think every single effort we can make to protect and honor human expression, to tie our technology to lived truth and a human voice, and to ensure that the irreplaceable labor of building civilization is one at which people can survive and thrive, is an effort worth making.
Ann Kjellberg is the founding editor of Book Post.
Book Post is a by-subscription book review service, bringing snack-sized book reviews by distinguished and engaging writers direct to our paying subscribers’ in-boxes, as well as free posts like this one from time to time to those who follow us. We aspire to grow a shared reading life in a divided world. Become a paying subscriber to support our work and receive our straight-to-you book posts. Adrian Nicole LeBlanc, Jamaica Kincaid, Marina Warner, Colin Thubron, Reginald Dwayne Betts, John Banville, Lawrence Jackson, Nicholson Baker, Ingrid Rowland, John Guare, Álvaro Enrigue, Emily Bernard, more.
Follow us: Instagram, Facebook, TikTok, Notes, Bluesky, Threads @bookpostusa
If you liked this post, please share and tell the author with a “like.”
You list so vividly and evocatively what writing does, or should do, that you might consider offering this as a NYT op-ed to get the message out.
Thank you for the second part of this fascinating essay, Ann. A couple of thoughts, off the top of my head. There is of course language meant to lift humanity up and language meant to help you figure out how your new washing machine or sewing machine, which is likely to be a computer, works. I guess what I'm thinking is, I can see that AI taking over some things might not be so bad, or at any rate, much worse? For example, the DMV manual, which I had to read this summer to take a test (after 40 yrs of driving, long story). I am not convinced that a computer could make it that much worse, even if it had never driven a car before. (I did love the suggestion to "dress for all weathers" to take my test. I am still trying to figure out how to do that!)
There is language all over the place that is inaccurate or does not conform to reality, that is written by people, not machines.
But I don't especially want to read a novel written by a machine because I'm not interested enough in what machines can do. And this may be a flaw in me. Since the machines we make are just reflections of us ... I feel like with AI we could be trying to find out more about who we are. More about ourselves and how our minds work. But it seems the main motivation for AI development is making money, for the usual suspects, not enhancing human potential.
I agree completely that there should be transparency at every level so that people know what they are getting and where it came from. But we don't have that now and it is hard to see how we will have more of it in the future, in a way that matters. It's not like computers have taught any of us patience ... or to read the fine print ... of course 'they' don't really want us to read the fine print ... they just want to make the world a better place, for gosh sakes!
Anyway these are just some ramblings from my tired brain this evening ... thank you again for an educational and inspiring read.