Social media is either suppressing politics or running wild with it; the academic study of misinformation is being shut down; election interference is rising. Is moderation possible in the internet we have?
Thanks, Ann. Lots to chew on here. I have real reservations about content moderation on social media platforms. From a First Amendment perspective, private companies are certainly entitled to exercise content moderation. And there is plenty of harmful content. But I might nevertheless prefer that they refrain, because I lack confidence that they will do it in a manner that is impartial. And I think those efforts will therefore only further undermine confidence in "the system" and will increase cynicism. It is, however, a tough call, and I'm not very sure about any of my own opinions in this area, if for no other reason than that I'm actually just not very interested in social media, so I probably don't pay enough attention to it to be well-informed!
There’s no such thing as impartiality. That’s a liberal myth. There’s no reason to allow people to say completely racist shit on a public platform except for some weird fetishization of an empty notion of “freedom of speech.”
Perhaps it is possible to say, though, that finding the point at which “racist shit” verges into an opinion with which one could meaningfully contend is not always straightforward; and if one wants to participate in a reading environment that is not merely a recapitulation of one’s own views, it is a worthwhile effort to try to arrive at articulable standards that can face in multiple directions.
Yes, but one can say that without invoking the shibboleth of freedom of speech or appealing to some "impartial" standard.
I so appreciate your commenting, Peter! I was aware in this post that I was kind of cutting corners. I have talked before about moderation and disinformation, and here I was telegraphing a bit and not being very careful about the distinctions. Of course there is definitely gray territory between disinformation and difference of opinion. I meant for this one to be about the closure of academic study of disinformation and how dangerous content spreads on the internet, but it bled out into other things.

The central point is the concern that the large companies that dominate the economy and our communications infrastructure are private and keep their data proprietary. Without their consent there is no way to study how they work, which is so consequential for society, and they are disincentivized from sharing information that might alienate advertisers and harm their business. There is plenty of evidence that they *can* benefit financially from practices that cause social harm, and how to contain those practices is something we leave them to decide for themselves in the black box of their engagement algorithms. A classic example is the spread on Facebook of conspiracy theories about the Rohingya in Myanmar that contributed to their being massacred and expelled. (Frances Haugen testified that Facebook’s safety features get weaker outside the English language.) The social media companies do plenty of moderating as it is: of child pornography, for instance, and of black marketeering. There’s evidence, though, that a lot of illegal material gets through. And Casey Newton had a landmark story on the horrible experiences of the humans who do this moderating (https://www.theverge.com/2019/2/25/18229714/cognizant-facebook-content-moderator-interviews-trauma-working-conditions-arizona).

Should Facebook take down a post that gives people false information about voting in their area in order to suppress the vote? Or one that gives out information about how to break into a vote-counting facility? Facebook actually froze me out of my Book Post page three months ago (and I am a paying business customer), and I have yet to get any response to my request for support; that unresponsiveness is just like the problems these small-town election officials are having when they report falsified posts of election information. There’s a serious question about where to draw the line, regarding content on the one hand and consequences on the other: what should be against the law, what should be openly disclosed so consumers and advertisers can make their own decisions, what should be amplified algorithmically, what are appropriate standards within businesses, and what kinds of disclosure do these companies owe the public and their customers? I do appreciate your reminding me, and us, that within all this there is plenty of room for concern about recognizing freedom of inquiry and creating pluralistic forums for public discussion.
First-rate reporting and analysis, as always, but I sense the lights going out more rapidly than I would have thought. Scary.
I so appreciate your comments on these posts, Jean, but I also feel sad for worrying you! We still have Henry James and Dr. Johnson!
Yes, we still have James, whom I've been reading, and any study of history shows that we were always in the soup. Love to you.