The County Election (1852), George Caleb Bingham, Saint Louis Art Museum (gift of Bank of America). According to the museum’s caption, an Irish immigrant is taking an oath that he has not voted elsewhere, a Revolutionary War “76-er” veteran descends the steps after voting, and an inebriated citizen is dragged to the ballot box. Two boys on the ground play mumblety-peg, a knife game that progressively increases in risk.
In recent weeks the Supreme Court heard arguments in cases addressing the urgent question of what government’s role should be in protecting and moderating speech online—a question that has become more pressing as we near a presidential election and as language generated by artificial intelligence becomes increasingly prevalent on the internet. Murthy v. Missouri, heard on Monday, takes up lower court decisions, in a suit brought by Missouri and Louisiana, that prohibit government communication with social media platforms about their moderation decisions: advice or requests about what to put up or take down. Questioning by the justices seemed to indicate that they leaned in the direction of seeing interaction between government officials and social media platforms as continuous with a long and constitutionally protected history of government communication with the press, so long as those officials do not exercise coercion (see this post by internet law scholar Daphne Keller for a useful breakdown of the precedents). As two justices who had themselves served in government (Kavanaugh and Kagan) averred, such exchanges happen “literally thousands of times a day” and often involve conveying useful information like public health advisories and voter registration information (internet law scholar Kate Klonick had a witty review of the arguments). The legitimacy of such interventions became a highly visible cause when newly installed (then-)Twitter CEO Elon Musk opened his company’s records to a group of selected journalists to disclose what Musk regarded as smoking-gun evidence of government meddling in the former ownership’s moderation decisions (“The Twitter Files,” Part One, Part Two). As Jim Rutenberg and Steven Lee Myers uncovered in The New York Times last weekend, that initiative had deep roots in the controversy around the platforms’ response to election denialism in 2020; the vigorous fanning of that controversy has led to a widespread reluctance on the part of social media companies today to intervene in instances of alleged misinformation and to a dismantling of academic efforts to study and report on misinformation online.
At the end of February the court heard arguments in two cases brought by the tech industry advocacy group NetChoice against new laws in Florida and Texas requiring tech platforms to justify content moderation decisions outside a very narrow band (is it pornography? does it incite violence?). The platforms, along with many advocacy groups that promote more vigorous moderation against disinformation and hate speech online, argued that the platforms have a First Amendment right to make editorial decisions as private companies (see Adam Serwer in The Atlantic on the history of arguments for a “right to post”). As in the Murthy case, the justices’ questioning, which drew on comparisons as diverse as newspapers, telephone lines, shopping malls, Etsy, and (🎉) bookstores, seemed to indicate that they saw platforms’ latitude to moderate content as falling within private businesses’ constitutionally protected right to exercise editorial judgment. One outraged politician’s ideologically motivated removal of XYZ opinion is another corporate executive’s self-interested business decision to run a platform attractive to controversy-averse advertisers. (The platforms’ algorithms, as it happens, have been shown to favor conservative political messaging, although laws like Florida’s and Texas’s were designed to curtail a perceived liberal bias in tech.)
At the same time, both states and the federal government have been wrestling with whether and how to intervene to protect children online. On February 22 the Florida legislature passed a bill enacting some of the most prohibitive measures in the country against minors’ use of social media; Governor Ron DeSantis vetoed it, and a revised version passed on March 6. (Similar laws have passed in Utah, Arkansas, and Ohio, and been found unconstitutional in Arkansas and Ohio.) Most social media platforms require in their terms of service that users be at least thirteen. The legislation takes up measures like requiring proof of age (which would violate the internet’s promise of user anonymity), parental consent, curfews, and additional content moderation for would-be users between the ages of thirteen and eighteen. Although there is broad agreement that children need to be protected on the internet, no particular measure seems to meet with either the tech industry’s or civil libertarians’ approval. Governor DeSantis’s objection involved concerns around parental rights and anonymity. At the end of February bipartisan federal legislation, the Kids Online Safety Act, moved toward a vote in the US Senate; it would create a “duty of care,” enforceable by the Federal Trade Commission, requiring tech platforms to mitigate certain dangers to users under the age of seventeen and to allow them to opt out of algorithm-based recommendations. As fiery hearings with tech executives about child safety in January showed, this is a rare issue with bipartisan support, and yet no significant tech legislation has made it through Congress since 1998, and the prospects here don’t seem much different.
These skirmishes are taking place amidst a broad realignment of government intervention in technology. In Europe comprehensive legislation called the Digital Markets Act went into effect on March 6, instituting an escalating series of fines and interventions, potentially culminating in divestiture, for tech companies that are found to be “self-dealing,” or giving their own products a competitive advantage within their systems. On March 4 European regulators fined Apple $2 billion over allegations that it shut out music platforms like Spotify, and its share price fell 3.1 percent in the US. A parallel European law, the Digital Services Act, compels “risk assessments” of the possible harms of material on a platform and audits of companies’ responses. The European Union reached agreement on sweeping regulation of artificial intelligence in December. Opponents of such legislation in the tech industry argue that Europe’s heavier legislative hand has stifled innovation, but tech-friendly journalists like Kara Swisher and Casey Newton argue that freeing up competition from a few dominant players would encourage growth and that Europe’s legislation is setting a standard that is being usefully adapted in other countries, helping to introduce necessary “guardrails” to tech internationally.
Although hope for meaningful federal tech legislation remains faint, a newly active Federal Trade Commission and Department of Justice have been busy. The FTC brought an antitrust suit against Amazon last September; an appeals court ruled last week that the FTC can reopen its privacy case against Meta (Facebook, Instagram, and Threads); a Department of Justice suit against Google for rigging its markets was announced in December; and a DOJ antitrust suit against Apple was announced on Thursday. Offering audiences more choice would allay some of the urgency around the big platforms’ now near-unilateral moderation decisions. As Casey Newton said of the European legislation, if “ten years from now there’s going to be five major search engines and six major smartphone operating systems and eleven major e-commerce platforms around the world, to me, that would be the ideal: that we distribute the balance of power much more broadly across companies, across regions. It doesn’t feel like the fate of humanity is in the hands of five companies.”
Relatedly, this month the US House of Representatives startled everyone by suddenly advancing bipartisan legislation to force the Chinese technology company ByteDance to divest itself of TikTok or see the app banned from the US. Although there have been free speech arguments against such a measure, and claims that it would be a gift to dominant American platforms, embolden authoritarian censors abroad, and alienate younger voters, many seasoned observers, including The New York Times’s Kevin Roose and Kara Swisher, author of the much-discussed tech memoir Burn Book, argue that TikTok has not done enough to assuage concerns that it operates as a surveillance and propaganda tool. Casey Newton has written that it “seems ridiculous that we have rules around foreign ownership of broadcast media in this country but not digital media, where the bulk of political discourse arguably now takes place.” Energy around the proposal seems to have flagged, though, in part due to vacillating instructions to GOP lawmakers from former President Trump.
Meanwhile in artificial intelligence, the new technology that threatens to upend our way of life … [Read Part Two of this post here]
Postscript: On Friday Kevin Roose had a story in The New York Times about how Reddit salvaged its reputation, to the point of an IPO last week, with emphatic moderation.
Ann Kjellberg is the founding editor of Book Post.
Book Post is a by-subscription book review delivery service, bringing snack-sized book reviews by distinguished and engaging writers direct to our paying subscribers’ in-boxes, as well as free posts like this one from time to time to those who follow us. We aspire to grow a shared reading life in a divided world. Become a paying subscriber to support our work and receive our straight-to-you book posts. Recent reviews: Sarah Ruden on Marilynne Robinson’s Genesis; John Banville on a new book on Emerson by James Marcus; Yasmine El Rashidi on Ghaith Abdul-Ahad’s personal history of the Iraq War.
Square Books in Oxford, Mississippi, is Book Post’s Winter 2023 partner bookstore! We partner with independent bookstores to link to their books, support their work, and bring you news of local book life across the land. We’ll send a free three-month subscription to any reader who spends more than $100 with our partner bookstore during our partnership. Send your receipt to info@bookpostusa.com.
Follow us: Instagram, Facebook, TikTok, Notes, Bluesky, Threads @bookpostusa
If you liked this piece, please share and tell the author with a “like.”
Not ready for a paid subscription to Book Post? Show your appreciation with a tip.