The County Election (1852), George Caleb Bingham, Saint Louis Art Museum (gift of Bank of America). According to the museum’s caption, an Irish immigrant takes an oath that he has not voted elsewhere, a Revolutionary War “76-er” veteran descends the steps after voting, and an inebriated citizen is dragged to the ballot box. Two boys on the ground play mumblety-peg, a knife game that progressively increases in risk.
In recent weeks the Supreme Court heard arguments in two cases that address the urgent question of what government’s role should be in protecting and/or moderating speech online—questions that have become more pressing as a presidential election nears and as language generated by artificial intelligence becomes increasingly prevalent on the internet. Murthy v. Missouri, heard on Monday, takes up court decisions in Missouri and Louisiana that prohibit any government communication with social media platforms about their moderation decisions—advice or requests about what to put up or take down. The justices’ questioning suggested that they leaned toward seeing interaction between government officials and social media platforms as continuous with a long and constitutionally protected history of government communication with the press, so long as those officials do not exercise coercion (see this post by internet law scholar Daphne Keller for a useful breakdown of the precedents). As two justices who had themselves served in government (Kavanaugh and Kagan) averred, such exchanges happen “literally thousands of times a day” and often involve conveying useful information like public health advisories and voter registration information (internet law scholar Kate Klonick had a witty review of the arguments). The legitimacy of such interventions became a highly visible cause when newly installed (then-)Twitter CEO Elon Musk opened his company’s records to a group of selected journalists to disclose what Musk regarded as smoking-gun evidence of government meddling in the former ownership’s moderation decisions (“The Twitter Files,” Part One, Part Two).
As Jim Rutenberg and Steven Lee Myers uncovered in The New York Times last weekend, that initiative had deep roots in the controversy over the platforms’ response to election denialism in 2020. The vigorous fanning of that controversy has made social media companies today widely reluctant to intervene in instances of alleged misinformation, and has led to the dismantling of academic efforts to study and report on misinformation online.