Notebook: Immoderate
Social media is either suppressing politics or running wild with it; academic study of misinformation is shut down; election interference is rising. Is moderation possible in the internet we have?
Our institutions of communication and information seem more confused than ever about the right amount of talk about politics. Major newspapers and other outlets cannot decide whether covering extremism, public lying, and conspiracies promotes them or discredits them. To reproduce is to amplify; to remain silent is to acquiesce. Younger staff at book publishers and bookstores revolt against publishing and distributing the work of figures they consider malevolent, even as the market for nonfiction declines and the field of book-length commentary is ceded to the loudest talking heads with the biggest followings; those condemned by the activist wing cry censorship and consolidate their gains in publishing redoubts on the opposite side of the culture wars.
As has been widely reported, after the convulsions of Cambridge Analytica, the January 6 uprising, and dueling covid realities, most of the major social media companies have stepped back from the content moderation that those events seemed urgently to demand. After a muscular effort by Facebook in 2020 to monitor for election-information manipulation, Facebook parent company Meta demoted all political content in the years that followed in favor of “friends and family,” recoiling from charges of political favoritism in its moderation decisions. When it announced its Twitter competitor Threads in 2023, Meta’s Instagram head Adam Mosseri openly proclaimed that the platform would disfavor political speech; Facebook not only no longer sponsors a journalism-promoting news tab, it deprioritizes outgoing links to news outlets in personal posts, although reportedly half of US adults say they get their news from social media, predominantly Facebook. The New York Times recently described a chastened Mark Zuckerberg who, personally affronted by government challenges to his business, has pulled even his private philanthropies back from projects that might be interpreted as political and muted employee activism. Meanwhile the Biden administration’s efforts to curtail the spread of covid disinformation contributed to a partisan political backlash, leading to judicial decisions and bureaucratic guidance that can curtail the government’s ability to distribute public information at all. According to The Washington Post, for example, officials at the National Institutes of Health sent a memo last year warning employees “not to flag misleading social media posts to tech companies and to limit their communication with the public to answering medical questions.”
Big tech companies are also cutting jobs and moving resources out of safety and security roles and into the race for artificial intelligence. Facebook whistleblower Frances Haugen disclosed back in 2021 that Facebook’s combination of electronic and human monitoring was not sufficient to protect its platforms from malicious use, especially in languages other than English; Meta has since cut 21,000 jobs, including many in trust and safety and customer service. Mark Zuckerberg has also eliminated the regular briefings on election security that he held during the last presidential election and disbanded the company’s election integrity team, according to the Times. As an example of the kind of thing that gets through: in June the Times reported on a widely shared Facebook troll who made up stories, under the guise of “satire,” that elude Facebook’s brakes on “politics” by focusing on “culture war topics” like “Hollywood elites and social justice issues.” Meta discontinued a fact-checking tool that partnered with news organizations amidst layoffs in 2023, and in August it shut down its transparency tool CrowdTangle, which helped researchers and journalists identify trends on the platform.
Meta’s withdrawal from political speech, and from its commitments to election security, has been complemented by an unleashing of political speech on Twitter/X, the platform of rival tycoon Elon Musk, who has broadcast his resistance to content moderation. (Mark Zuckerberg said in a podcast last year that Elon Musk’s reduction of content moderation staff at Twitter was “probably good for the industry.”) Ironically, while Mark Zuckerberg was denounced by the right for his foundation’s work on election security in 2020, Elon Musk’s super PAC, America PAC, is now running ground operations for the Donald Trump campaign, in the wake of the candidate’s dismantling of the traditional party apparatus. TikTok, whose ownership by Chinese parent company ByteDance has been challenged by recent legislation (responding in part to fears of the app’s vulnerability to political manipulation by the Chinese government), is guided by a black-box algorithm known for its hospitality to disinformation and viral spread.
Just as the tech platforms remove their own safeguards against the spread of disinformation, nonprofit and academic institutions devoted to studying and identifying disinformation online are being squelched. In June one of the most prominent, the Stanford Internet Observatory, and its Election Integrity Partnership project, formed with the University of Washington to track social media activity that was “demonstrably harmful to the democratic process,” were shut down as a result of costs associated with lawsuits and politically motivated congressional inquiries (see our consideration of these last spring). Congressional opponents of scholarship on the spread of disinformation online have surfaced as critics of academic freedom in other respects: condemning university administrators for their handling of campus protests, limiting study of African American history and other contested fields, and supporting challenges to K-12 libraries and curricula.
Supporters of the Stanford project reported that philanthropic groups were also shifting their attention away from disinformation research to artificial intelligence, in part because of donor sensitivity to charges of political bias in the definition of “disinformation.” At public universities scholars found their work vulnerable to aggressive open records requests that publicized their personal information and exposed even undergraduate research assistants to public harassment. (Harvard disinformation researcher Joan Donovan charged that she was fired because of pressure from Meta, a major donor to her program, but an analysis by The Chronicle of Higher Education was not able to substantiate that charge.)
The nonprofit advertising consortium Global Alliance for Responsible Media closed in August after being sued by Elon Musk’s Twitter/X for encouraging advertisers to leave the platform because their ads were appearing alongside “harmful or risky” content. Twitter/X previously sued the nonprofits Center for Countering Digital Hate and Media Matters after they produced research showing a rise in hateful messaging on the platform; the suit against the Center for Countering Digital Hate challenged the methods the researchers used to access data on platform behavior. Representatives Lori Trahan, Sean Casten, and Adam Schiff sent a letter to X expressing concerns that independent researchers were being barred from access to data needed to study “the proliferation of hate speech and extremism online”; former Facebook public policy director Katie Harbath told The Washington Post at the time, “if we’re pushing for more transparency from these companies—if we don’t want them to continue to be black boxes—we need to be having folks from academia and civil society doing this research.”
Traditional news sources, too, have become uncertain about the boundaries between disclosure and abetting bad actors. In mid-August the Trump campaign reported that sensitive internal documents had been hacked, and on Friday the US Department of Justice charged three members of the Iranian Revolutionary Guard in the intrusion. In contrast to the wide coverage of a trove of emails distributed by Russian hackers from within the Hillary Clinton campaign in 2016, in this instance Politico, The Washington Post, and The New York Times declined to report on the hacked documents, which they received from an anonymous source. Some journalists marveled at the outlets’ restraint and questioned their motives. On Tuesday independent journalist Judd Legum reported that he had received an updated trove of what he described as “stolen” documents, which it “would be a violation of privacy and could encourage future criminal acts” to publish; he determined that the disclosures were not on balance newsworthy enough to justify collaborating with the perpetrators. On Thursday independent journalist Ken Klippenstein published one item from the collection, a file of preliminary research on J.D. Vance as a potential running mate, on his Substack, prompting Twitter/X to remove Klippenstein’s post of the link and close his account, although Elon Musk has claimed he was motivated to buy Twitter by the decision of its previous owners not to circulate contents from Hunter Biden’s alleged laptop published by the New York Post, made shortly after the FBI had warned social media companies to be on the lookout for hacked material peddled by foreign adversaries. (Twitter founder Jack Dorsey later regretted that decision.)
The still blacker box of large-language-model artificial intelligence threatens to be an even more fertile environment for spreading misinformation and ungoverned content. In June the Times reported on an AI news factory called “BNN” that regurgitated stories from various sources, or made them up from whole cloth, and whose output came to be shared by a number of legitimate news outlets. Monitors at those outlets checking incoming news items for veracity were, the story reported, themselves increasingly automated. In May ChatGPT creator OpenAI released a report saying that it had identified and disrupted five online campaigns by foreign governments (Russia and China) using its AI tools deceptively to manipulate public opinion and influence politics. On Monday the Director of National Intelligence and the FBI briefed reporters on an uptick in the use of artificial intelligence by Russia, Iran, and China to sway American opinion ahead of the November election. These indications of vulnerability to interference within the social media platforms and artificial intelligence tools come as English-speaking countries earlier this month imposed sanctions on Russia’s RT (formerly “Russia Today”) network over financing disinformation operations; Meta and YouTube banned RT; and the Department of Justice indicted two RT employees for channeling funds through a US media company to distribute Russian messaging ahead of the November election.
OpenAI for its part has been beset by upper-level departures, including three in May and one in June, of employees engaged with safety and security in the development of its model. On Wednesday OpenAI’s Chief Technology Officer Mira Murati announced her immediate departure without explanation. OpenAI is reportedly abandoning its (increasingly cosmetic) nonprofit status, under which it was created to explore artificial intelligence in the public interest. One of the May departing executives, Gretchen Krueger, expressed concern about the ways that “tech companies in general can disempower those seeking to hold them accountable,” saying, “we need to do more to improve foundational things like decision-making processes; accountability; transparency; documentation; policy enforcement; the care with which we use our own technology; and mitigations for impacts on inequality, rights, and the environment.” (Running AI computation consumes vast amounts of energy; in July Google announced that it was no longer carbon neutral, and Microsoft is struggling to meet its climate goals, Wired reported, as a result of the energy and water demands of AI computing.) Another, Jan Leike, said of the company that “safety culture and processes have taken a backseat to shiny products.” As the now unchecked leader of one of the most powerful tech companies in the world, OpenAI CEO Sam Altman has been a central presence in government explorations of potential AI legislation. Mark Zuckerberg has reportedly had two one-on-one conversations with Donald Trump this summer; he has not spoken with Kamala Harris. Some in tech have expressed the hope that since Kamala Harris arose from a political environment shaped by Silicon Valley, she will reverse some of the Biden administration’s more muscular efforts to place constraints on big tech, such as antitrust enforcement by FTC Chair Lina Khan and Assistant Attorney General for Antitrust Jonathan Kanter.
By contrast, in Europe legislators have been unified in combating misinformation online and in demanding accountability from tech for its disruptions of democratic processes. In June, after many years of development, the EU passed a law that threatens tech companies with significant fines if they do not erect safeguards against “negative effects on civic discourse and electoral processes” from their products. US tech entrepreneurs argue that such laws place burdensome constraints on innovation and growth.
All this comes as old-fashioned verified journalism has less traction in traditional social media and is threatened with still less as AI mechanisms in social media and search begin summarizing sources of information instead of linking to news outlets bound by publicly available standards for sourcing and fact-checking.
Meanwhile, on the ground, local election officials say that they are unable to get through to Facebook support to curtail disinformation about election rules and balloting, and that their public service messages with information for voters get no traction amidst the downgrading of political posts.
In 1985 Neil Postman wrote about “the typographic mind” as foundational to American democracy: access to literacy, to printed and preserved and distributable knowledge, was at the heart of a nation of laws and of the sensibility that governance could be shared out across geography and social class. (Tocqueville: “There is scarcely a pioneer’s cabin where one does not encounter some odd volumes of Shakespeare. I recall having read the feudal dramas of Henry V for the first time in a log-house.”) Postman set the intellectual security promised by typography against thinking in images, first in photography, then in television. Now words have returned to us because they were, for a while, the cheapest thing to send electronically (though perhaps not for long; see the growth of image-making artificial intelligence and the attention the Zuckerbergs of the world are putting into virtual reality). But digital words have lost their fixity, and with the loss of typography we are losing recourse to a stable shared reality, or the prospect of one. Our urgent concern should be to rebuild for ourselves a shared reality, however contested: a reality rooted in our need to be reliably informed rather than in the financial interests of a few unchecked tycoons with apparently limited regard for the public interest, interests that seem precisely served by feeding us degraded information. Agreeing, in the very grain of our systems of communication, that we have a common stake in the integrity of elections seems a necessary starting point for democratic stewardship of language.
Ann Kjellberg is the founding editor of Book Post. Read more of her Notebook posts for Book Post here.
Book Post is a by-subscription book review delivery service, bringing snack-sized book reviews by distinguished and engaging writers direct to our paying subscribers’ in-boxes, as well as free posts like this one from time to time to those who follow us. We aspire to grow a shared reading life in a divided world. Become a paying subscriber to support our work and receive our straight-to-you book posts. Our recent reviews include: Michael Robbins on the perils of translating Ovid, Christopher Benfey on the adventures of the bohemian Stevensons, Yasmine El Rashidi on the real, the invented, and the Egyptian prison memoir.
Dragonfly and The Silver Birch, sister bookstores in Decorah, Iowa, are Book Post’s Fall 2024 partner bookstores. We partner with independent booksellers to link to their books, support their work, and bring you news of local book life across the land. We send a free three-month subscription to any reader who spends more than $100 at our partner bookstore during our partnership. To claim your subscription send your receipt to info@bookpostusa.com.