Notebook: (1) Letting the Dogs Out
What Meta’s recently announced content moderation changes mean for writing and discourse
“The Haywain” triptych (detail), Hieronymus Bosch. Prado Museum, Madrid
I’ve been thinking about how not only federal policy but also private commercial decisions that mirror or anticipate it are quickly becoming enmeshed in the way we receive information, with consequences that will likely be far-reaching and difficult to reverse. On Tuesday, Meta CEO Mark Zuckerberg, wearing a watch reportedly worth $900,000 and a recently adopted blingy gold chain, heralded the arrival of a “cultural tipping point,” “a new era” calling forth extensive changes to what gets seen and read within the world’s largest social media company, under his sole charge, which commands the attention of 3.24 billion daily users, 70 percent of people on the internet.
According to The New York Times, these changes had been developed in the weeks since the presidential election and Mark Zuckerberg’s Thanksgiving visit to Mar-a-Lago to attend to a president-elect who had months earlier called for his imprisonment and whose incoming government is scheduled to take his company to trial for antitrust violations in April. President-elect Trump’s designated Federal Communications Commission chairman, Brendan Carr, threatened Meta in November with “broad-ranging actions” if such changes weren’t made. According to the Times, the changes were the work of a small team of advisors meeting in close secrecy, including the Meta lobbyist (Meta outspends the other tech companies on DC lobbying by a wide margin), conservative political advisor, and Trump ally Joel Kaplan, who, five days before the changes were announced, was elevated to the position of head of Meta’s global policy, replacing former UK Deputy Prime Minister Nick Clegg and teeing him up to make Tuesday’s announcement on the president-elect’s favorite show, Fox & Friends. (Mark Zuckerberg chose to do his own interview, three days later, with podcaster Joe Rogan.)
Ordinarily such policy changes undergo extensive internal review and consultation with civil rights and other public interest groups. This time, Mark Zuckerberg reportedly met with the president-elect and Joel Kaplan phoned some “conservative social media influencers” the day before the announcement, but Meta staff and the contractors whose work Mark Zuckerberg was about to axe learned about the changes at the same time as the rest of the world. Meta’s Vice President of Civil Rights resigned three days later.
In his Tuesday announcement Mark Zuckerberg made a significant fudge that misdirected responsibility for the harms these platform changes were designed to redress. He opened by declaring the dissolution of the fact-checking program that Facebook had initiated after the 2016 election, condemning his own program for political bias and implying that it was responsible for censorship of opposing views on the platform. Before Tuesday, as the Times’s Kevin Roose has pointed out, Mark Zuckerberg had studiously avoided using the word “censorship” to characterize the moderation decisions that all social networks perform “as a matter of business” to weed out violent, spammy, offensive, and other posts that alienate users and advertisers. (Usually the word “censorship” is reserved for government action, as distinct from editorial or commercial decision-making.)
In fact, Meta’s fact-checkers have no authority to remove, demote, or otherwise label posts on Meta’s platforms. They are a group of third-party organizations, recruited by Meta from a variety of political perspectives, that signed onto a code of principles committing them to nonpartisanship and transparency. They identify for Meta alleged instances of misinformation on the platform and provide sourcing for their determinations. It was up to Meta, under the direction of Mark Zuckerberg himself, to decide what to do with that information—ignore it or, sometimes, identify the post with a warning label and demote it in its algorithm. Charlie Warzel said on The Bulwark podcast that “you have to be so deep online and so aggressive about your political posting” to come across such labels. “In most of the cases where it’s not like porn or murder,” posts flagged by the fact-checkers received “just a little note at the bottom, a little fact-check note.”
Decisions to remove flagged posts altogether were based not on fact-checkers’ judgments but on Meta’s internal “community standards.” One participant in the fact-checking program said “to my knowledge they didn’t take anything down just because it was false—their takedowns were only with false information that could cause harm.” Another said that no one within Facebook or outside it had ever accused any of their judgments of being politically motivated. Meta has provided no evidence to support Mark Zuckerberg’s claim that the fact-checkers showed political bias. Meta also spent $280 million to create an independent Oversight Board to review decisions to remove posts. The Oversight Board released an anodyne statement on Tuesday supporting the changes, saying with equivocation unbecoming of a board dedicated to objective oversight that the company’s moderation practices had “rightly or wrongly … been perceived as politically biased.”
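To make the division of authority concrete, here is a minimal sketch, in Python, of the workflow as the preceding paragraphs describe it. Every name in it is invented for illustration; none of this is Meta’s actual code.

```python
# A minimal illustrative sketch of the division of authority described
# above. Fact-checkers supply a flag plus sources; any action on the
# post (ignoring it, labeling and demoting it, or removing it under
# Meta's internal community standards) remains Meta's own decision.

from dataclasses import dataclass, field

@dataclass
class FactCheck:
    post_id: str
    rating: str                          # e.g. "false", "partly false"
    sources: list = field(default_factory=list)

def meta_decides(check: FactCheck,
                 violates_community_standards: bool,
                 meta_chooses_to_act: bool = True) -> str:
    """Meta, not the fact-checker, chooses what happens to a flagged post."""
    if violates_community_standards:     # e.g. a falsehood that could cause harm
        return "remove"
    if not meta_chooses_to_act:
        return "ignore"                  # Meta is free to disregard the flag
    return "label_and_demote"            # "a little note at the bottom"

check = FactCheck("post123", "false", ["https://example.org/report"])
print(meta_decides(check, violates_community_standards=False))  # label_and_demote
```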
Mark Zuckerberg next announced that instead of employing fact-checkers Meta would follow Twitter/X in using “Community Notes,” a crowd-sourced program for identifying harmful or untruthful posts that Elon Musk expanded after buying Twitter and firing most of its “trust and safety” staff. The Meta statement deferentially said “we’ve seen this approach work on X,” but as X is a private company that does not share internal data, nothing is known about whether its decision to weaken moderation or to institute Community Notes has been either commercially successful or effective in handling harmful content. Meta’s program has not yet been created, but on Twitter/X Community Notes are generated by selected teams of volunteers, who can be removed if their contributions are rated “unhelpful”; for a Community Note to appear, these volunteers must reach a consensus across political divisions. Critics argue that the requirement of agreement means that significant instances of misinformation go unlabeled; that volunteers for the Community Notes program have been shown to be motivated by partisan differences; that Community Notes take too long to appear, allowing viral posts to go unchecked; that (contra Meta’s vision for such a program) Notes on X are entirely non-transparent and not subject to appeal; and, as tech journalist Casey Newton has observed, that they often cite the very fact-checking organizations until now supported by the social-media companies’ funding. Researcher Sol Messing told NBC News: “The people who you get to participate [in creating Community Notes] will be incredibly important. I haven’t seen exactly how they’re going to recruit people to write Community Notes and how they’re going to ensure that it’s not just a bunch of partisan activists who are participating.”
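The consensus requirement is the heart of the design. X has open-sourced its Community Notes ranking code, which scores raters and notes jointly through matrix factorization; the toy sketch below, with invented data and thresholds, illustrates only the basic bridging rule, that a note surfaces when raters from different political clusters agree it is helpful.

```python
# A toy sketch of the "bridging" rule behind Community Notes, with
# invented data and thresholds. X's production ranking system is far
# more elaborate, but the core requirement illustrated here is the
# same: a note appears only with cross-partisan agreement.

from collections import defaultdict

# Hypothetical ratings: (rater's political cluster, found it helpful?)
ratings = {
    "note_a": [("left", True), ("right", True), ("left", True)],
    "note_b": [("left", True), ("left", True), ("left", True)],
    "note_c": [("left", True), ("right", False), ("right", True)],
}

def should_show(note_ratings, threshold=0.6):
    by_cluster = defaultdict(list)
    for cluster, helpful in note_ratings:
        by_cluster[cluster].append(helpful)
    # Require raters from at least two clusters, and a mostly-helpful
    # verdict within each of them.
    if len(by_cluster) < 2:
        return False
    return all(sum(v) / len(v) >= threshold for v in by_cluster.values())

for note, note_ratings in ratings.items():
    print(note, should_show(note_ratings))
# note_a True  -- cross-partisan agreement
# note_b False -- unanimous, but only one cluster rated it
# note_c False -- one cluster is split, so no consensus
```

The fate of note_b in this sketch is exactly the critics’ worry: a note that never clears the cross-partisan bar simply never appears, however accurate it may be.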
Likely a much more significant change to the platform than removing the rarely seen labels of third-party fact-checkers and introducing possibly negligible Community Notes, though, was the new policy Mark Zuckerberg rolled out further into his announcement: loosening the platforms’ automated moderation systems so that they capture only “high-severity violations of content policy,” such as those involving drugs and terrorism. Lower-severity violations, like harassment and abuse, will be addressed only when they have been reported by users.
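The asymmetry can be made concrete in a few lines. This is a schematic sketch, with assumed category names and actions rather than Meta’s actual policy labels or enforcement pipeline:

```python
# A schematic sketch of the routing change just described. The category
# names and actions are assumptions for illustration only.

HIGH_SEVERITY = {"terrorism", "drug_sales", "child_exploitation"}
LOW_SEVERITY = {"harassment", "hate_speech", "spam"}

def route(predicted_labels: set, user_reported: bool = False) -> str:
    """Decide whether a post ever reaches a moderator."""
    if predicted_labels & HIGH_SEVERITY:
        return "automated_review"        # automation still intervenes here
    if predicted_labels & LOW_SEVERITY and user_reported:
        return "review_on_report"        # the new default path
    return "no_action"                   # unreported abuse now passes through

print(route({"harassment"}))                      # no_action
print(route({"harassment"}, user_reported=True))  # review_on_report
print(route({"terrorism"}))                       # automated_review
```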
In her 2021 testimony before Congress, then-Facebook whistleblower Frances Haugen said that the platform’s automated systems, especially in non-English-speaking countries, were not by themselves sufficient to patrol for harmful content across billions of posts, and that Facebook was not prepared to devote funds to supplementing them adequately with human effort. Since then Meta has laid off more of its “trust and safety” staff. Now Meta proposes to leave the work of identifying lower-severity harmful content entirely to users, even as content is increasingly siloed into private groups and more carefully targeted feeds of “unconnected content” (posts directed at users from people they do not follow), as Meta hopes to imitate TikTok’s success in capturing attention with an algorithmic feed based on users’ past behavior and interests. Hence users are more likely to see the posts of people who share their views, and perhaps less likely to report violations of community standards. Many users have certainly been frustrated by the automated removal of harmless posts under the broad sweep of Meta’s moderation mechanisms, but Meta is about to discover whether they prefer finding their feeds swamped by spam, hate speech, foreign propaganda, and provocation—posts now increasingly generated in the millions by AI bots seeking easy marks.
Though Mark Zuckerberg has in his public statements blamed the Biden administration and “legacy media” for puritanical demands for content policing, in reality social media companies generally remove harmful content for business reasons—to keep users and the advertisers who follow them happy and on the platform. It was advertisers who most loudly protested Elon Musk’s suspension of content moderation on X. One former Meta trust and safety employee told Casey Newton, “I can’t tell you how much harm comes from non-illegal but harmful content … degrading, horrible content that leads to violence and that has the intent to harm other people.” Maria Ressa, the Filipino journalist who won the Nobel Peace Prize in 2021, said Meta’s withdrawal from moderation augurs “extremely dangerous times” for journalism, democracy, and citizens. Michael Harriot reminded his followers that Meta’s moderation, imperfect as it may have been, may well have significantly curtailed Covid hoaxes, calls to extremist violence, and foreign election meddling, and that it probably saved lives.
Meta has also adjusted its internal community guidelines to relax constraints on posts it determines, by its own lights, to be “the subject of frequent political discourse.” Specifically, users will be permitted to say denigrating things about women, LGBTQ+ people, and immigrants, but not other groups. Dave Willner, former head of content policy at Facebook, posted that Meta had “seriously weakened its hate speech standards in a way that makes all protected groups generally, and immigrants, women, and LGBTQ+ people specifically, much more vulnerable to abuse on the site,” saying that this was a seismic shift away from policies that had been standard since he wrote them at Facebook in 2009. Alex Schultz, the company’s chief marketing officer and highest-ranking gay executive, defended the changes internally, saying, according to the Times, that “Meta’s policies should not get in the way of allowing societal debate” on issues that Meta judges to be “political,” as distinct from simply maligning people on presumably non-controversial grounds like their ethnicity or religion or some other characteristic …
Read Part Two of this post here
Ann Kjellberg is the founding editor of Book Post. Some of her related Notebooks on technology and the reading life include:
“Immoderate”: On moderation and politics on the internet
“Mr. Smith and Goliath”: On government intervention in tech
“Thought Plutocrats”: On tycoons and access to ideas
“The Facebook Files”: On the revelations of Frances Haugen
“The Writer of the Future”: On writing and tech
Book Post is a by-subscription book review delivery service, bringing snack-sized book reviews by distinguished and engaging writers direct to our paying subscribers’ in-boxes, as well as free posts like this one from time to time to those who follow us. We aspire to grow a shared reading life in a divided world. Become a paying subscriber to support our work and receive our straight-to-you book posts. Some Book Post writers: Adrian Nicole LeBlanc, Jamaica Kincaid, Marina Warner, Lawrence Jackson, John Banville, Álvaro Enrigue, Nicholson Baker, Kim Ghattas, Michael Robbins, more.
Community Bookstore and its sibling Terrace Books, in Park Slope and Windsor Terrace, Brooklyn, are Book Post’s Winter 2024 partner bookstores. Read our profile of them here. We partner with independent booksellers to link to their books, support their work, and bring you news of local book life across the land. We send a free three-month subscription to any reader who spends more than $100 at our partner bookstore during our partnership. To claim your subscription send your receipt to info@bookpostusa.com.
Follow us: Instagram, Facebook, TikTok, Notes, Bluesky, Threads @bookpostusa