I’m writing about layoffs again, but this time to a different end!
The year began with thousands of layoffs, now from big tech companies, after Meta, Amazon, Microsoft, Google, and Apple had already cut more than a hundred thousand jobs in 2023 from their post-pandemic hiring peaks of 2020 and 2021. January saw 25,000 layoffs across roughly a hundred firms. But analysts don’t consider these contractions a sign of weakness; on the contrary, these five companies have gained nearly $3.5 trillion in market value in the last year, earning them, with the addition of Tesla and chip-maker Nvidia, the moniker “The Magnificent Seven” for their power over the US stock market. In 2023 their gains contributed more than 60 percent of the total return on the S&P 500. Last week Meta set a record for the largest single-day gain in market value.
The current rounds of layoffs partly offset overgrowth during the pandemic, when everyone was stuck at home and needed technological support to do just about everything, but a large share reflects an industry-wide redirection of resources toward the development of artificial intelligence. As the New York Times put it, the companies are “spending billions to build A.I. technology that they believe could one day be worth trillions,” hiring scarce and highly paid engineers and building costly infrastructure to create and maintain AI systems. Meta is expected to spend billions of dollars this year on chips from fellow magnificent-sevener Nvidia, and Microsoft partner OpenAI’s CEO Sam Altman is raising billions to invest in his own chip manufacturing initiative. Unexpectedly rapid advances in artificial intelligence have brought within view (four to five years, Sam Altman thinks) a goal that researchers used to consider remote: “Artificial General Intelligence” (known by the acronym AGI), the arrival of “AI systems that are generally smarter than humans.” Each of these companies wants to be at the tiller when that wave breaks.
So, we have these giant corporations that already dominate the American economy and (as discussed in our last notebook) increasingly our channels of communication, vying to control the development of a mechanism that will, per a Time magazine profile of Sam Altman as 2023 “CEO of the Year,” “turbocharge the global economy, expand the frontiers of scientific knowledge, and dramatically improve standards of living for billions of humans—creating a future that looks wildly different from the past.” As technology writer Casey Newton writes, “if we accept the premise that one or more of these companies really does build AGI, I think our ability to imagine what the future looks like deteriorates pretty quickly. The resulting changes to our economy, our politics, and how we spend our time would seem to be so significant that I’m not even sure it’s worth trying to guess what [the company that builds it] looks like afterward.”
One cannot help but observe that, after learning in the last ten years of the potentially dire consequences of leaving the development of society-altering technologies to a few tycoons, we don’t seem to have figured out much in the way of alternatives to that approach.
It was the enormity of the potential consequences of AGI that caused Sam Altman and his OpenAI co-founders (originally including fellow “Magnificent Seven” CEO Elon Musk, who went off to found his own AI shop) to create a unique nonprofit group to work on AGI in 2015, “with the goal of building safe and beneficial artificial general intelligence for the benefit of humanity … while remaining unencumbered by profit incentives.” In 2019 they created a “capped profit” subsidiary to allow the company to raise money from investors, including an eventual $13 billion from Microsoft, which holds a “minority economic interest” in OpenAI, incidentally making possible the founders’ own relatively muted enrichment. The whole mechanism was called into question last fall, just a year after OpenAI created a sensation with the release of the chatbot ChatGPT, the fastest-growing tech product in history, when the nonprofit board fired Sam Altman and then, when he appeared to be on the brink of starting a competing shop at their investor Microsoft, rehired him. By some accounts, it was a technological breakthrough in the direction of AGI that set in motion the events leading to Sam Altman’s ouster.
The episode appeared to be a defeat for the nonprofit model, culminating in the departure of the two board members (both women) who were scholars and scientists rather than entrepreneurs, and who were most aligned with groups, like the Effective Altruism movement, that had raised concerns about the potential long-term consequences of developing AI “superintelligence” before humanity was ready for it. (Technologists call the task of ensuring that AGI conforms to human values, perhaps euphemistically, “the alignment problem.”) But other accounts of the byzantine episode suggest that the resolution reasserted the independence of the board by removing Sam Altman from it and promising an open investigation into concerns about his management. As far as I can tell these promises are as yet unrealized: we have not had much news of this board’s progress toward more enlightened governance. Ben Thompson has argued that the heavily invested Microsoft “ultimately held the trump cards because of its combination of compute capacity and access to OpenAI IP”; Microsoft came out of it with a nonvoting seat on the board.
The resulting board is supposed to safeguard the interests of humanity, yet the absence from it of any scholars or researchers, and the odd presence of the newly appointed Larry Summers, the infamously combative former US Treasury Secretary and Harvard president who most recently had to apologize for accusing Columbia’s Edward Said Professor of Arab Studies of being an antisemite, are concerning. Of the work of the one remaining board member from the prior regime, Casey Newton recently said, “Quora CEO Adam D’Angelo once sought to build a community for writers, and has now all but abandoned that vision in favor of a marketplace for robot assistants.” New York Times tech columnist Kevin Roose wrote of the outcome, “AI Belongs to the Capitalists Now.”
The board now consists only of the money men, all of them white. Wired reported that women, who are underrepresented in AI development overall, are more visible in the (sidelined) area of “ethics and safety.” (Meta seems exemplary of this pattern.) Yet prominent women raising ethics concerns around tech seem to keep getting shunted out, like Joan Donovan, who alleges that her 2023 dismissal from a Harvard research team on disinformation came at the behest of donor Mark Zuckerberg. Timnit Gebru, who was fired from Google in 2020 over research disclosing bias in AI systems, told Wired for its story “Prominent Women in Tech Say They Don’t Want to Join OpenAI's All-Male Board” that the prospect of serving on the OpenAI board was “repulsive to me.”
Mark Zuckerberg argues that, although he wields absolute personal power at Meta thanks to owning a majority of the company’s voting shares, Meta’s pursuit of AGI will be more democratic than OpenAI’s because he is following an open-source model, releasing Meta’s AI products to be adapted by other developers, and he faults companies like OpenAI for keeping their models proprietary. Open source also allows Meta to benefit from others’ work and augurs a plethora of products compatible with Meta’s architecture, speeding, in Mark Zuckerberg’s view, Meta’s progress toward AGI. (Meta also benefits from access to vast seas of our personal data from Facebook and Instagram for training its AIs.)
OpenAI originally intended to make its models open source (hence “OpenAI”) but changed course because of some unknown balance of business interest and caution about putting such tools in the hands of criminals and malefactors, a concern to which Mark Zuckerberg has no visible answer. Time tells us that OpenAI initially hesitated to release an earlier iteration of GPT, “fearing it could have a devastating impact on public discourse,” but, according to Karen Hao and Charlie Warzel in The Atlantic, in the end rushed to get it out to beat a competitor. “The fundamental belief motivating OpenAI,” according to an employee who spoke to Time, “is, inevitably this technology is going to exist, so we have to win the race to create it, to control the terms of its entry into society in a way that is positive. The safety mission requires that you win. If you don’t win, it doesn’t matter that you were good.” Sam Altman said in the same profile that the odds that AGI “will destroy civilization” are “nonzero” but “low if we take the right actions,” which I guess we are supposed to count on him to take. As Sam Altman himself said last summer, before all this went down: “No one person should be trusted here. The board can fire me. I think that’s important.” Over at Google, Wired reported last week that the team at its “office of compliance and integrity” responsible for oversight of AI development had been broken up and that its leader, who had helped draft Google’s principles for “responsible AI practices,” and a “lead AI principles ethics specialist,” both women, had left the company.
Meanwhile OpenAI’s relationship with Microsoft is being investigated on antitrust grounds, and the nonprofit’s IRS filing reporting a mere $45,000 in revenue for 2022 is raising eyebrows. “I don’t know at this point that this is a regulatory oversight issue. I think this is a public trust issue,” Mark Surman, president of the Mozilla Foundation, told CNBC. “OpenAI needs to figure out which direction it wants to take. If they want to be seen as this public institution making sure AI is in the service of humanity, we need a lot more transparency.” Tim Cook, the chief executive of Apple, which is set to announce its own AI initiative later this year, reportedly told analysts a week ago, “our M.O., if you will, has always been to do work and then talk about work and not to get out in front of ourselves.” So, not much transparency to be expected there either.
Sam Altman and Google CEO Sundar Pichai have a perhaps disingenuous habit of calling on the federal government to regulate AI development, but it’s hard to be too optimistic about that prospect in the current political moment. (CNBC reported that AI-related lobbying increased by 158 percent in 2023.) Precedents like Meta’s Oversight Board, with its limited remit, do not inspire much confidence that these companies are equipped to regulate themselves. I wish the many think tanks and institutes whose denizens are quoted in articles about AI were offering more concrete, public-facing proposals for managing this situation better than we have managed powerful, nascent technological tools in the past.
I think it says something important that the mighty implications of this new technological force first became known to many of us through a device that talks to us: through words. It is in words that civilization encapsulates its knowledge to pass on: in oral tradition, in scripture, in literature, in history, in news. That the vehicles for preserving and transmitting words are more and more entrusted to a few entities guided solely by their own financial interests, perhaps incentivized (at times) to deceive, to manipulate, to coerce in pursuit of those interests, empowered to shape what we learn of the world and even to relinquish that power to nonhuman entities, should cause us all to sit up. Casey Newton has written, in the aftermath of the OpenAI imbroglio, of “the world we now find ourselves in, where AI safety folks have been made to look like laughingstocks, tech giants are building superintelligence with a profit motive, and social media flattens and polarizes the debate into warring fandoms”: “I suspect it will now be a long time before anyone else in this industry attempts anything other than the path of least resistance” in oversight of AI. Time quoted one of those think-tank scholars, Divya Siddarth, the co-founder of the Collective Intelligence Project, to the effect that these events have “put into sharp relief that very few people are making extremely consequential decisions in a completely opaque way,” as well as Daniel Colson, executive director of the Artificial Intelligence Policy Institute, who “believes the episode has highlighted the danger of having risk-tolerant technologists making choices that affect all of us. ‘Unfortunately, that’s the dynamic that the market has set up for.’” “Risk-tolerant technologists”: those are the folks this Croesian struggle for survival seems designed to deliver to us as the dominant strain.
Ann Kjellberg is the founder and editor of Book Post.
Please join us for Book Post Fireside Reading! Every Sunday in February we read a section of Willa Cather’s My Ántonia. Subscribe to receive installments and follow the conversation here!
Book Post is a by-subscription book review delivery service, bringing snack-sized book reviews by distinguished and engaging writers direct to our paying subscribers’ in-boxes, as well as free posts like this one from time to time to those who follow us. We aspire to grow a shared reading life in a divided world. Become a paying subscriber to support our work and receive our straight-to-you book posts. Coming soon: John Banville, Sarah Ruden, Adrian Nicole LeBlanc, more!
Square Books in Oxford, Mississippi, is Book Post’s Winter 2023 partner bookstore! We partner with independent bookstores to link to their books, support their work, and bring you news of local book life across the land. We’ll send a free three-month subscription to any reader who spends more than $100 with our partner bookstore during our partnership. Send your receipt to info@bookpostusa.com. Read more about Square here in Book Post. And buy a Book Post Holiday Book Bag for a friend and send a $25 gift e-card at Square with a one-year Book Post subscription!
Follow us: Instagram, Facebook, TikTok, Notes, Bluesky, Threads @bookpostusa
If you liked this piece, please share and tell the author with a “like.”