Landscape with precious commodity: a Microsoft data center in Washington State. A cloud-based supercomputer using technology from Nvidia, which Microsoft opened in August to host its artificial intelligence capacity, was in November declared the third-fastest supercomputer in the world.
Read Part One of this post here!
Meanwhile in artificial intelligence, the new technology that threatens to upend our way of life, industry leader OpenAI on March 8 reinstated recently ousted and restored CEO Sam Altman to its board, newly populated with industry leaders (as opposed to computer scientists and technology scholars), and announced that an investigation by the law firm WilmerHale, not publicly released, had exonerated him. With options like the mercurial Elon Musk and the robotic Mark Zuckerberg, folks seem relieved to repose their trust in this well-spoken and civil individual, and yet here we are again with power over a black-box technology resting in one guy and beholden to a major corporate financial interest, the original nonprofit board’s pretensions at offering a more publicly minded model now neatly dusted away. Elon Musk himself recently sued OpenAI, largely, apparently, out of pique, and yet within his complaint some observers acknowledged a trace of shared regret at the abandonment of a more humanistic vision for tech governance. Sam Altman on Monday assured podcaster Lex Fridman that “what we’d really like is for the board of OpenAI to answer to the world as a whole” and be an agent “for the public good,” but I guess we just have to trust him on that. Facebook and Google once made similar avowals, claiming, as Altman does in this interview, to be doing us good by offering their product for free, which turned out in their case to have been an illusion: we paid for these marvelous services with personal information that they have abundantly monetized. In the Lex Fridman podcast Sam Altman deflects questions about OpenAI’s intentions on that score.
The pursuit of AI continues to shape the tech landscape and our economy. Apple recently abandoned a years-long electric car project to work on AI; the failure of a Google AI initiative sent its stock plummeting. Nvidia, the manufacturer of the chips needed for processing the so-called large language models, saw the value of its shares grow 243 percent last year and now has a $2.2 trillion market capitalization. On Monday it launched its newest generation of chips in an auditorium seating eleven thousand people. Sam Altman is reportedly out trying to raise money to build his own competitor. In last week’s podcast with Lex Fridman, he said that “compute,” or computational capacity, “is going to be the currency of the future. I think it will be maybe the most precious commodity in the world.”
It’s an environment that continues to reward the big players. As Sam Altman buys up the raw material for building artificial intelligence, The Washington Post reported that the hunger for AI engineers in Silicon Valley is pulling engineering talent out of academia, limiting the scope of research. The New York Times and a number of authors have sued OpenAI for using their writings as artificial intelligence “training models,” while other publishers and platforms are making deals for use of their work; this is perhaps where things are heading. Meanwhile Meta is shutting down Facebook’s news tab and its data transparency tool CrowdTangle; political posts dominate on Threads, even though its owner Meta disavows engaging with news or politics; and Google’s AI chatbot won’t answer questions about elections. Max Tani writes in Semafor that it’s harder and harder for journalists to do a big investigative story, as they are more and more operating without institutional supports and their powerful subjects are often able to marshal overwhelming pressure, including harassment on unmoderated tech platforms, against criticism. Former Penguin Random House CEO Madeline McIntosh announced earlier this month the creation, with several other publishing heavy-hitters, of a new imprint, Authors Equity, that gives writers a bigger stake in their sales but offers no advance and maintains little permanent staff, again shifting the balance in favor of those with built-in audiences and existing resources. Scott Galloway said on the Pivot podcast with Kara Swisher that employees in companies that are not seeing growth (for example, publishing and journalism) will feel the pinch most from artificial intelligence, since those companies have no new hiring to offset productivity gains and redundancy from AI; analysis of Upwork data is already showing freelance writing and translating jobs seeing the most losses. Even movie stars are showing the strain from diminishing job protections in creative fields.
What struck me about the recent Supreme Court arguments about free speech and the internet was that acknowledging platforms’ First Amendment rights to pursue an editorial vision and keep the internet from being a gladiatorial free-for-all seems in conflict with Section 230 of the Communications Decency Act, the 1996 federal statute that shields social media platforms from liability for the content they carry. (The Supreme Court reaffirmed Section 230 in a ruling last May.) Scott Galloway has said that news outlets in effect pay a 40 percent tax (not sure how he arrived at 40 percent), because they are responsible for fact-checking and verifying what they publish against a threat of liability, whereas the platforms, benefiting from carrying the material that news outlets and others develop, have no such responsibility; Kara Swisher says it has been easy for the tech giants to make money in the twenty-five years in which they have gone without meaningful regulation because “they don’t pay the costs, they don’t pay the actual costs, the society ends up paying for them. The real costs aren’t built in because they have no guardrails. Other companies have guardrails, including liability.”
This freedom from liability puts the platforms at a powerful strategic advantage over actual newsgathering, as well as documented research in, for instance, history and the sciences. As unaccountable internet distribution threatens newsgathering with extinction, there will be fewer and fewer authoritative sources for the platforms and chatbots to draw upon. Kara Swisher observes that removing Section 230 protection from content that is algorithmically distributed would also expose the platforms to liability for harms to children and teenagers. Section 230 seems unsustainable alongside a regime that anticipates any responsible moderation by the platforms, and modifying Section 230 would seem at the heart of addressing the dangers of an unregulated internet, especially as long as the internet’s readership has so few avenues for making choices online. There even seems enough bipartisan consensus around these issues that drafting legislation would be possible, if voters could commit to electing candidates interested in legislating—although it’s possible that the tech companies’ growing lobbying power provides sufficient inertial force. Without intervention, the existing commercial and technological incentives all line up around bombarding us with information that is false and harmful, and around drowning out or starving whatever smaller voices might hold the fewer and fewer titans to account.
Bookstores kept coming up in the Murthy arguments as examples of the value of an established freedom of editorial judgment. Of course we have lots of bookstores, and you can choose to go to one that reflects your values. A seminal 1963 precedent in these cases, Bantam Books v. Sullivan, involved bookstores, ruling that a state “Commission to Encourage Morality in Youth” could not pressure them to remove allegedly obscene books from their shelves. Kate Klonick writes, with reference to the Murthy arguments, that bookstores “need not sell ALL THE BOOKS in human existence, and as [Justice Sonia] Sotomayor rightly points out, if they did, they would not be useful to us, because it would exceed our ability to process the options. We rely on the curation of the bookstore, and accordingly their right to do so.” I couldn’t decide whether to be delighted or depressed when I saw Daphne Keller tweet, “Everyone has their own version of AI panic. Mine is wanting an immediate moratorium on libraries pulping the old, little-used books in their stacks. Of course I wanted that anyway. But the spiraling unreliability of digital information shines a new light on the value of old books.”
Some other Book Post Notebooks on technology, writing, and ideas: on TikTok, virality, journalism, and books; on artificial intelligence; on the Facebook files and known harms of social media, including for children; on Twitter, tech, and journalism; on the writer of the future
Ann Kjellberg is the founding editor of Book Post.
Book Post is a by-subscription book review delivery service, bringing snack-sized book reviews by distinguished and engaging writers direct to our paying subscribers’ in-boxes, as well as free posts like this one from time to time to those who follow us. We aspire to grow a shared reading life in a divided world. Become a paying subscriber to support our work and receive our straight-to-you book posts. Recent reviews: Sarah Ruden on Marilynne Robinson’s Genesis; John Banville on a new book on Emerson by James Marcus; Yasmine El Rashidi on Ghaith Abdul-Ahad’s personal history of the Iraq War.
Square Books in Oxford, Mississippi, is Book Post’s Winter 2023 partner bookstore! We partner with independent bookstores to link to their books, support their work, and bring you news of local book life across the land. We’ll send a free three-month subscription to any reader who spends more than $100 with our partner bookstore during our partnership. Send your receipt to info@bookpostusa.com.
Follow us: Instagram, Facebook, TikTok, Notes, Bluesky, Threads @bookpostusa
If you liked this piece, please share and tell the author with a “like.”
Not ready for a paid subscription to Book Post? Show your appreciation with a tip.
so great to be reading your thoughts on this. I've been leading "writing with AI" workshops for teachers through IWT at Bard College, and the wide range of responses to and implications of Gen. AI is . . . rapidly evolving. Thanks, Ann!
Your comments on generative AI are insightful, especially because you take a broad view.
I've worked with and tried to use various technologies described as AI since I wrote my first code in the late 1960s. I agree with the general feeling that genAI is significant, but I've lived by the tech hype-cycle long enough to question anything captured by the tech marketing machine.
GenAI has been accepted and taken up with astounding speed. Perhaps the post-pandemic stagnation and layoffs in tech have something to do with GenAI's rapid incorporation in products like Microsoft Office, but I suspect it's more than that. Folks seem to realize that genAI potentially eases a lot of the inane drudgery that precedes creative products and therefore fear that jobs will be eliminated.
I've been trying to eliminate jobs with software for the past three decades. At first, I worked on these projects because I loved the potential of software to do things better. When I was young, I simply did not think about the jobs I might eliminate.
With experience, I realized that I consistently failed to eliminate jobs with every new feature or product. Why? In business, when a department becomes more efficient, more business is generated and more people are hired. Word processing eliminated typing pools, but typists long ago moved on to other jobs. Office staffs got larger, not smaller. Typists became analysts, managers, and executives and the economy became more productive.
Roles change, but jobs are not eliminated. How will genAI change writing and authorship? I will state, counterintuitively, that writing quality will improve on the five-year horizon. The inane chatter and hallucinations of genAI will quickly repel audiences. (In another place, I will argue that eliminating genAI hallucinations is impossible.) Writing will become more firmly based in facts, requiring more research. Human insight and imagination will become much more important than they are today. Will dullards be out of jobs? No. They will either take on new challenges and cease their dullest habits, or they will find places supporting the new rush of creativity from those who do step up.
Watch the five-year horizon.