Against Content Moderation
I joined the internet writing platform Substack a year ago. My poor laptop was overloaded with creative writing that had no obvious path to publication, and I was going through a period of political re-examination and wanted to be able to, as it were, think out loud.
It turned out to be a far better decision than I could have expected. Not only could I post whatever I wanted—without needing to negotiate with editors or pay a submission fee to a literary magazine and get a form rejection in response—but there was a genuine community. It was common for writers to post paragraphs-long comments on other people’s work, and an air of thoughtfulness and civility prevailed that was rare in the era of quick-twitch social media.
So I was very surprised to discover, from an Atlantic article of 23 November 2023, that "Substack Has a Nazi Problem." The piece was written by Jonathan M. Katz, who was at that time himself a Substacker, and who—after rooting around in Substack’s darker corners and finding 16 newsletters that contained "overt Nazi symbols"—declared that, "just beneath the surface, [Substack] has become a home and propagator of white supremacy and anti-Semitism."
Substack Has a Nazi Problem
The newsletter platform’s lax content moderation creates an opening for white nationalists eager to get their message out.
The Atlantic | Jonathan M. Katz
Katz’s piece triggered much handwringing within the Substack community. An open letter to Substack’s co-founders, Chris Best, Hamish McKenzie, and Jairaj Sethi, signed by 247 "Substackers Against Nazis," demanded to know why Substack was "platforming and monetizing Nazis" and asked them not to put their "thumb on the scale" by promoting or monetizing Nazis or white supremacists.
The problem is that, as far as I can tell, the Substack leadership isn’t actually doing anything to promote the far-right. The platform is simply upholding the free speech principles outlined in its Content Guidelines. These stipulate a very narrow set of circumstances under which content may be censored or prohibited: for example, the publication of personal details without permission and content that "incite[s] violence based on protected classes." The guidelines of this US-based company closely track the prevailing Supreme Court interpretation of the First Amendment. There is objectionable content on Substack, as Katz discovered, but its authors tend to steer well clear of directly "inciting violence." So, in practice, what the Substackers Against Nazis are advocating for are modified Terms of Use that allow for more stringent content moderation.
Censorship-Free Social Media: the Next Big Thing, or Just Another Echo Chamber?
One of the forefathers of the modern internet, John Gilmore, famously remarked that the net interprets censorship as damage and routes around it.
Quillette | Matthew Mott
That strikes me as a severe overreaction to a very mild problem. The existence of 16 newsletters featuring Nazi iconography is unfortunate—but, as Katz himself admits, this is "a tiny fraction" of the site’s 17,000 paid newsletters. The popular former Substacker Casey Newton submitted a shorter list of accounts believed to be in violation of the "incitement to violence" standard. According to Substack management, these amounted to a mere six newsletters with a combined total of twenty-nine paying subscribers. Nevertheless, Newton announced on 12 January that he was leaving the platform because management had failed to update their terms and conditions to include language indicating that "they would regard explicitly Nazi and pro-Holocaust material to be a violation of their existing policies."
On 22 December 2023, co-founder Hamish McKenzie doubled down on the platform’s stance on free speech, writing, "We are committed to upholding and protecting freedom of expression, even when it hurts." In response, some users left the platform. One of the émigrés was Talia Lavin, a former New Yorker fact-checker, who explained, "We’ve left Substack behind, after its founders stated, in no uncertain terms, that they’re not just OK with, but in principle supportive of, having loads of out-and-out Nazis on their platform."
The spat was focused on a mere handful of publications, some of which—like one newsletter alleging Jewish control of the stock market—get no traffic at all. But the fracas is probably best understood as a skirmish in a much larger war about online content moderation, which is, as law professor Evelyn Douek has written, "a wicked problem with unenviable and perhaps impossible trade-offs." It is also a problem that the mainstream consensus has gotten completely wrong.
In their early days, most of the social media platforms were proudly, avowedly laissez-faire about content, in line with the American free speech tradition. Law professor Jonathan Zittrain calls this "the Rights Era" of online governance.
That era ended in the mid to late 2010s. The conventional wisdom is that it collapsed once the internet reached a sort of critical mass and revealed its true nature as a social destabilizer and disseminator of hate speech and misinformation.
In a 6,000-word 2017 post dubbed "the Mark Manifesto," Facebook’s CEO Mark Zuckerberg wrote, "As we build a global community, this is a moment of truth … It’s about whether we’re building a community that helps keep us safe—that prevents harm, helps during crises, and rebuilds afterwards." Later that year, Twitter’s CEO Jack Dorsey issued a similar mea culpa for the over-permissiveness of the Rights Era: "We prioritized [safety] in 2016. We updated our policies and increased the size of our teams. It wasn’t enough. … In 2017 we made it our priority and made a lot of progress … We decided to take a more aggressive stance in our rules and how we enforce them."
Read Mark Zuckerberg’s full 6,000-word letter on Facebook’s global ambitions
Grab a cup of coffee and take a seat.
Vox | Kurt Wagner
The theory underlying this about-face is that the internet represents a radically new mode of communication. "A commitment to expression is paramount, but we recognize the internet creates new and increased opportunities for abuse," wrote Monika Bickert, Facebook’s Vice President for Global Policy Management, in 2019. A 2023 paper in the Journal of Communication by Emile de Keulenaar and others concludes, "As platforms succeed in becoming global social structures, the First Amendment-based pretense of impartiality grew unsustainable."
That view seemed to be borne out by the sheer scale of what De Keulenaar and colleagues refer to as "ugly content." In late 2020, Facebook "took action" on 1.1 million pieces of content per day, while YouTube removed 100,000 videos daily. In the first half of 2020 alone, Twitter investigated reports against 12.4 million unique accounts for potential violations of the platform’s rules. A new industry of content moderators was born—most of them low-paid workers at offshore sites, sifting through the very worst of the web. A 2019 Washington Post profile by Elizabeth Dwoskin, Jeanne Whalen, and Regine Cabato depicts them as "silent heroes of the Internet, protecting Americans from the ills of their own society."
Free Speech Doesn’t Protect Nazis. It Protects Us From Nazis
Free speech advocates don’t defend the speech rights of Nazis because they believe that Nazis have anything valuable to contribute to a marketplace of ideas.
Quillette | Daniel Friedman
People began to use new analogies to describe the web. Zittrain describes, at some point in the late 2010s, a shift from a focus on rights to a "public health" model, in which certain content—"misinformation" and "disinformation" in particular—was perceived as a type of contagion. This, too, soon became mainstream opinion.
In a 2019 New York Times op-ed, Brittan Heller, an attorney for the Anti-Defamation League, went so far as to write: "The idea that platforms like Twitter, Facebook and Instagram should remove hate speech is relatively uncontroversial." For Douek, the new moderation regime represented "a more mature approach" to speech governance than the First Amendment-infused philosophy that had prevailed in the earlier part of the decade.
The shift to the public health model was widely presented as a necessary response to the volume of hate and misinformation online. But, reading through the press reports and academic literature, it’s possible to see it in different terms: as a panic.
De Keulenaar et al. write that, "The loss of tolerance for hateful and abusive content seems to respond to a number of events on the ground … suggest[ing] a certain causal connection between online content and offline events." The events to which they allude were, above all, the 2016 election of Donald Trump and the 2017 Charlottesville "Unite the Right" rally. As De Keulenaar et al. explain, the goal of content moderation became "not necessarily adjudicating content as more or less acceptable, but moderating it on the basis of evolving and ever contingent public conceptions of accountability."
In other words, the more stringent rules weren’t primarily a response to the changing nature of internet traffic itself—it wasn’t that the internet had suddenly shown itself to be less civilized than expected. Instead, the platforms were responding to pressure by "journalists, activists, and politicians" to address the rise of the far-right. As De Keulenaar et al. conclude: "Moderation is more a political than a moral art."
As the "public health" model took hold, the moderation regimes found themselves making rules that seem absurd in their Byzantine complexity and arbitrariness. Facebook’s 27-page Community Standards document, made public in 2018, prohibits, for example, images of "fully nude close-ups of buttocks unless photoshopped on a public figure," and permits the statement "migrants are so filthy," while forbidding the comparison "Irish are the best, but … French suck."
Even Douek, who considers the new moderation regime to be "salutary," concedes that the platforms are "largely just ‘making rules up’" and that "it is not just hard to get content right at this scale; it is impossible."
The armies of "silent heroes of the Internet" have since grown dramatically. By 2023, Meta (the company that owns Facebook) had hired around 40,000 people to work on safety and security, including some 15,000 content moderators. In 2020, Facebook paid out $52 million in compensation to content moderators who claimed that they suffered from PTSD after watching so much disturbing material. Most social media companies have since come to rely increasingly on AI moderation, which Facebook’s current help page describes as "central to our content review process."
During the pandemic, the social media giants even, as Douek notes, "removed tweets of world leaders" if their views contradicted those of "authoritative" sources like the World Health Organization. In October 2020, Twitter broadened its already flexible definition of harm "to address content that goes directly against guidance from authoritative sources of global and local public health information." But, as Douek acknowledges, "this apparently exceptional content moderation during the pandemic was only a more exaggerated version of how content moderation works all the time."
The Twitter Files, released in 2022–23, provided a vivid illustration of the extent to which moderation—or, in Twitter’s preferred term of art, "visibility filtering"—allowed the platform to "build blacklists, prevent disfavored tweets from trending, and actively limit the visibility of entire accounts or even trending topics—all in secret, without informing users," as Bari Weiss reported. "We control visibility quite a bit … normal people do not know how much we do," one Twitter engineer told Weiss and the Twitter Files journalists. The Twitter Files also demonstrated that the social media company was routinely working with government agencies in an effort to control public narratives: a backdoor obstruction of First Amendment rights. As Elon Musk put it, "The degree to which Twitter was simply an arm of the government was not well understood by the public."
1. Thread: THE TWITTER FILES — Matt Taibbi (@mtaibbi), December 2, 2022
THREAD: THE TWITTER FILES PART TWO. TWITTER’S SECRET BLACKLISTS. — Bari Weiss (@bariweiss), December 9, 2022
The response of some in the legacy media was to assert that the Twitter Files had not revealed much. For example, in a New York Times op-ed, Twitter’s former head of Trust and Safety Yoel Roth cited a tech journalist who asserted that there "was absolutely nothing of interest" in the leaked material. Roth’s claim indicates how quickly standards had shifted. Twitter had adopted a completely different philosophical framework for content moderation without ever having informed the public or Congress that First Amendment principles had now been "outgrown."
Whatever one makes of the ethics of Twitter’s moderation regime, on a practical level, it backfired. Elon Musk’s text messages to Jack Dorsey and others, released as part of a court discovery process during his acquisition of the site, suggest that free speech concerns were a central motivator of Musk’s purchase. Musk’s ally Mathias Döpfner, CEO of Axel Springer, for example, advised him to, "Make Twitter the global backbone of free speech, an open market place of ideas that truly complies with the spirit of the first amendment." By the time of Musk’s takeover in 2022, the "public health" model of content moderation had revealed its many limitations: it is arbitrary, biased, draconian, expensive, and ineffective, and probably even increases polarization.
Substack was launched in 2017, just when the "public health" model was gaining widespread favor, but its leadership chose to chart a different course. In a 2020 statement on the site’s moderation policies, the company’s three founders wrote, "We … disagree with those who would seek to tightly constrain the bounds of acceptable discourse. We think the principles of free speech can not only survive the internet, but that they can help us survive as a society that now must live with all the good and bad that the internet brings."
There are two assumptions behind Substack’s moderation policy. The first is that First Amendment principles still provide excellent guidelines for discourse. The second is that the well-documented problems Facebook and Twitter have faced stem more from their business models than from an excess of free speech.
The social media giants, the Substack leadership argued, have deep, in-built flaws: the arbitrary maximum character counts on Twitter, the use of ads and algorithms, the incentivization of outrage and of quick-twitch dopamine hits. At one point in the "Mark Manifesto" of 2017, Zuckerberg himself wonders if the problem with his platform isn’t so much the prevalence of "misinformation" as the inherent sensationalism that seems to drive so much social media traffic:
The harm is that sensationalism moves people away from balanced nuanced opinions towards polarized extremes.
If this continues and we lose common understanding, then even if we eliminated all misinformation, people would just emphasize different sets of facts to fit their polarized opinions. That’s why I’m so worried about sensationalism in media.
Substack’s leadership agreed. They contended that allowing human users—rather than algorithms—to control the platform would produce a more civil discourse and obviate the need for heavy-handed moderation. As Hamish McKenzie told the November 2023 Novitate conference:
Substack is a totally different system [from Twitter or Facebook] and much more like a home to an island of independent publications that people can actively opt into or opt out of. You invite that publication into your inbox, that’s a much more controlled space. It turns a temperature down on discourse. It’s less reactive, it’s much more of a thoughtful space where people have to defend themselves at length and they’re not just looking for a quick drive-by dunk.
Substack’s moderation philosophy has been harshly critiqued, however. In March 2021, the writer Jude Ellison Doyle announced his departure from the platform, claiming that it "had morphed into an online haven for transphobia." In January 2022, the Center for Countering Digital Hate condemned the fact that, "Substack does not explicitly prohibit users from spreading misinformation or warn them against spreading conspiracy theories about vaccines or Covid." And in April 2023, Nilay Patel, editor-in-chief of the technology magazine The Verge, told Substack CEO Chris Best: "You should just say no [to allowing overt racism on Substack] … And I’m wondering what’s keeping you from saying no."
For Patel—as for writers for The Washington Post, The Guardian and The New York Times—it was axiomatic that platforms must moderate content, or be considered complicit in spreading noxious viewpoints. But Substack already had a moderation policy, with deep roots in First Amendment jurisprudence. "We question the default assumption that aggressive content moderation is the answer to the problems it is supposed to solve," McKenzie wrote in a Substack note of 22 April 2023, in response to Patel.
Substack’s situation is somewhat complicated by the fact that its leadership actively participate in writing and commenting on their own platform and recommend specific content both on the site itself and in subscriber newsletters. For Casey Newton, "The moment a platform begins to recommend content is the moment it can no longer claim to be simple software"—implying that it must then shoulder the responsibilities of a publisher. But Substack’s leaders are usually quite clear as to when they are speaking as individuals and when they are speaking as representatives of the company.
Substack’s argument has been that it is necessary to prohibit only those types of speech that hamper our ability to have a free conversation. It is for this reason that Substack prohibits spam and incitements to violence. (Pornography is not permitted either, though "erotic literature" is.) But aggressive content moderation, which censors specific beliefs and views, is inherently resistant to balanced application. A model that prioritizes free speech rights, or that is largely based on the First Amendment, can more easily achieve even-handedness.
Novelist and podcaster Walter Kirn has argued that the controversy over the Nazi Substackers is motivated by jealousy, not genuine concerns about fascism. Kirn has compared Substack to a "new drug dealer" who has just moved into a neighborhood, prompting the "mafia kingpins"—the legacy media—to do everything they can think of to restore control. "That’s really all that happened. It’s that simple," Kirn has alleged.
There may be truth to this. Substack’s business model is very different from those of the legacy journalism and publishing industries. On Substack, writers can earn money directly from their readers without going through an intermediary. This has clearly reduced their dependence on traditional publications, as Ben Smith acknowledges in an article that The New York Times editors have fittingly titled "Why We’re Freaking Out About Substack."
Driving the controversy over the tiny number of Nazi newsletters is a far broader battle of ideas. Many heterodox writers—some of whom have been canceled by mainstream media—now have large followings on Substack. They include Alex Berenson, Robert Malone, Matt Taibbi, Bari Weiss, Glenn Greenwald, Andrew Sullivan, Aaron Maté, and Chris Hedges. For Kirn, "this episode in the evolution of the media would be [called] The Empire Strikes Back" and features the legacy outlets attempting to root out the heretics at their secret Substack base.
It’s an arresting metaphor, but there are more sober ways to put it. In his 2014 book The Revolt of the Public, the CIA analyst-turned-cultural-commentator Martin Gurri discusses the schism between "the old industrial scheme" and, in the new forms of online exchange, an "uncertain dispensation striving to become manifest," which he calls "the public." For Gurri, the two schemas have such widely different assumptions about what kinds of discourse are acceptable as to create an insuperable divide that has become the defining feature of our era. "The conflict is so asymmetrical that it seems impossible for the two sides to actually engage but they do engage and the battlefield is everywhere," Gurri writes.
After the 2016 election, the representatives of "the old industrial scheme" decided that the public couldn’t be trusted. Many blamed Facebook and Twitter for the spread of right-wing ideology and Russian disinformation and demanded that they install new guardrails—a task that the social media platforms took to with alacrity, even though it meant jettisoning the old First Amendment-inflected approach. But amid all the recent calls for a kinder, gentler internet, few people have realized that it might already be here.
Substack represents the internet at its best. While the social media platforms have amped up their content moderation and attempted to exert tighter control over speech, Substack has employed a simpler approach: giving users the tools to have a web presence and then assuming that they are grown-up enough to make their own decisions on what content they wish to post or see. It would be a shame if, in the panic over a handful of extremist newsletters, we lost sight of that underlying principle.
https://quillette.com/2024/02/02/the-case-against-content-moderation/