Another Way to (Not) Burn a Book
Writers are skeptical about content moderation on Substack. Can the Santa Clara Principles help rebuild their trust?
When the American author Ray Bradbury penned his famous line, “There is more than one way to burn a book. And the world is full of people running around with lit matches,” the words could not have rung truer.
Published in 1953, Fahrenheit 451 found its way onto America’s bookshelves amid a dark and contentious period of history. On the heels of the atrocities and chaos of the Second World War, McCarthyism and the Second Red Scare were in full swing; nuclear war had become a genuine possibility; and Hollywood actors and directors lived in fear of being blacklisted by their own government for their political leanings, whether perceived or real. The United States was in the midst of a fear-driven, paranoid moral panic — one that seemed to weigh heaviest on the nation’s writers, artists, and thinkers.
Seventy-one years later, the temperature at which paper burns is no longer relevant in a world where the exchange of ideas happens largely online. But in yet another divisive era of American politics, amid ongoing crusades for censorship on both the right and the left, the specter of a third Red Scare still looms large in the public psyche.
In the wake of the controversy surrounding Jonathan M. Katz’s report in The Atlantic last November, which revealed Nazi and similar far-right content on Substack, an American tech company, the age-old tension between safety and freedom of expression has left a new wave of writers perplexed and divided. Caught between wanting a safe place online for their fellow writers and craving a platform free from the political censorship that has crept onto other platforms in recent years, many have wondered: is there a way to balance both?
THE LOOSE ENDS OF FREE SPEECH
In the U.S., public debate over free speech has raged since at least 1791, when the First Amendment was ratified. Recognizing the political harms created by speech restrictions in Europe at the time, the framers of the Bill of Rights sought to limit the power of the new American government and ensure a more democratic process that better represented the will of the people.
Despite those early efforts, free expression in the U.S. — and its legal limits — has remained a matter of contention ever since, with battles over what can and cannot be said moving through the nation’s courts for more than a century. The ongoing debate sparked by Katz’s Atlantic article is a stark reminder that the fight over free speech is far from over in America, even in 2024.
On Monday, in response to public outcry over the Atlantic article, Substack’s co-founders told Platformer, a tech newsletter hosted on the platform, that Substack would enforce its existing policies by terminating the accounts of “several publications that endorse Nazi ideology” recently identified by the company and “will continue to remove any material that includes ‘credible threats of physical harm.’”
For many writers on Substack, the announcement came as welcome news — after all, true threats are not protected speech, even under the U.S. Constitution. And because the platform already moderates other types of speech, like spam and pornography, many users expected that other violations of its existing content guidelines would be similarly moderated.
Still, for writers on either side of the speech issue, the wording of Substack’s statement felt uncomfortably vague, leaving some wondering what position, if any, the company actually took on content moderation — and whether other kinds of nonviolent speech could be regulated in the future.
A HISTORY OF SILENCE
The history of the Internet shows that concerns about vagueness in content moderation are valid — and there are plenty of real-world examples that help explain why writers on Substack are rightly worried.
Last year, reporting by journalists Bari Weiss, Matt Taibbi, Michael Shellenberger, Lee Fang, David Zweig, and Alex Berenson, dubbed the “Twitter Files,” documented dubious content moderation practices under Twitter’s former leadership, raising questions about secret blacklists and the role of federal agents in silencing accounts on the platform — including accounts devoted to satire and other nonviolent speech.
Recent lawsuits have also accused the U.S. government of having a hidden hand in selectively silencing writers and journalists whose work criticized its policies, with Berenson v. Biden and Missouri v. Biden both alleging that the White House quietly placed its finger on the scale of social media content moderation in the last several years.
But abuses of power in content moderation are nothing new in the digital world — and they aren’t limited to state actors. Sometimes, individual moderators’ politics, prejudices, and ideologies can also play a role.
According to a blog post on Removed News by Robert Hawkins, the founder of Reveddit, a website that allows Reddit users to see which of their posts have been censored on the platform in real time, shadow moderation — that is, the secret silencing of voices online by rendering their accounts invisible to other users — has been in use since the 1980s. Hawkins writes, “[Shadow moderation] is so common that I now believe most social media users have probably been moderated at some point or another without their knowledge.”
A LIVED EXPERIENCE
In July 2023, Hawkins alerted me to the fact that my own Substack profile was invisible to other users — a fact I confirmed both through communications with Substack staff and by logging out and attempting to view my account in a private browser window.
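For writers who want to run the same check, the logged-out test can even be scripted. Below is a minimal sketch in Python, assuming a hypothetical profile address and assuming that a hidden profile returns an HTTP error or a generic not-found page to anonymous visitors; the exact behavior varies by platform, so treat the result as a hint rather than proof.

```python
import requests

# Hypothetical profile URL; replace with your own public profile address.
PROFILE_URL = "https://substack.com/@yourhandle"

def check_public_visibility(url: str) -> None:
    """Fetch a profile the way a logged-out visitor would and report what comes back."""
    # No cookies or auth headers: this simulates a logged-out visitor,
    # much like opening the page in a private browser window.
    response = requests.get(
        url,
        headers={"User-Agent": "visibility-check/0.1"},
        allow_redirects=True,
        timeout=10,
    )
    if response.status_code == 200:
        print(f"{url} is publicly reachable (HTTP 200).")
    else:
        # An error response for a profile you can see while logged in
        # is a hint (not proof) that the account may be hidden from others.
        print(f"{url} returned HTTP {response.status_code}; "
              "compare with what you see while logged in.")

if __name__ == "__main__":
    check_public_visibility(PROFILE_URL)
```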
It is unclear how the contents of my dry, organic, very occasional blog posts triggered the platform’s spam filters even as actual violent content did not. Still, the incident highlighted a serious risk stemming from the platform’s lack of transparency: if it could happen to me, it could happen to anyone. Accidental or not, secret content moderation can be detrimental for any writer hoping to build a following or make a living online.
Surprisingly, my experience with Substack was just one of several similar incidents last year. In December, after creating a TikTok account to document the restart of The Neighbors, my 2018 photojournalism project in which I photograph everyday people and interview them about their ideas for reuniting our nation’s divided society, I discovered that my account’s content was invisible to logged-out users. The platform offered no explanation beyond claiming I had somehow “violated its terms.”
The experience was jarring. TikTok, a social media platform that has drawn the ire of the U.S. government over its ties to China, appeared to have actively silenced a photography project centered on social unity in America — my photography project.
Thanks to the platform’s formal appeals process (something Substack currently lacks), my account was quickly restored after review by another moderator. Still, the situation raised a serious hypothetical question: what would happen if a moderator had a secret, divisive political agenda that even the platform they worked for wasn’t aware of? And worse — what if there were no way to track or appeal their removals?
REBUILDING ON SOLID GROUND
While the concept of censorship dates back at least to ancient Rome, it is clearly far from antiquated in 2024 — and writers are right to feel anxious about any tech platform, Substack included, that refuses to be transparent about the fairness of its content moderation practices.
In my work as a journalist, particularly covering consumer tech, I’ve had the opportunity to interview digital civil liberties experts who advocate for ways to make content moderation at tech companies more transparent and fair.
One of those strategies involves convincing social media platforms to adopt the Santa Clara Principles on Transparency and Accountability in Content Moderation. Established in 2018 by heavy hitters in the digital civil liberties space, including the Electronic Frontier Foundation, Access Now, Article 19, two chapters of the ACLU, and others, the principles offer a blueprint for tech platforms to balance safety and freedom in reasonable ways that align with human rights.
Spelling out five foundational principles and implementation recommendations that cover transparency, due process, clearly defined policies, limitations on state involvement, cultural competency, and integrity requirements, the Santa Clara Principles seek to create an online experience for users that is fair, transparent, and equitable.
In 2021, following an open consultation prompted by feedback about the inequitable application of moderation and algorithmic tools, the Santa Clara Principles were updated to add standards directed at state actors and to expand the recommended removal notification and appeals processes.
A FOCUS ON FAIRNESS … AND ACCOUNTABILITY
In a digital world where information can be altered or erased with a single click — and in a nation where majorities in both major parties believe it is “at least somewhat likely social media sites censor political views they find objectionable,” according to a 2020 Pew Research Center poll — transparency, fairness, and accountability are the bedrock of public trust.
Although the argument has been made in recent years that speech can be a form of violence, a case can also be made that censorship is a form of violence — particularly when it silences speech that does not threaten physical harm to others. Marginalized groups, journalists, and activists whose voices have been quietly hushed by powerful forces throughout history would surely agree.
As Substack moves into a new era of increased content moderation without explicitly defined content restrictions, third-party oversight, or notification and formal appeals processes for removed content, adopting the Santa Clara Principles could help rebuild trust with users who are right to question the company’s lack of transparency and the potential for biased abuses of power — particularly in an industry where money sometimes speaks louder than ethics.
By explicitly defining the content prohibited on Substack — and providing writers with takedown notifications and the right to appeal if they believe moderators have removed their content unfairly or in bad faith — the platform can offer a safe space online where threats of violence are not tolerated while ensuring that writers of all backgrounds and belief systems are treated equally.
As Jillian C. York, Director for International Freedom of Expression at the Electronic Frontier Foundation, wrote on the digital civil liberties organization's blog in November 2020: “With more moderation inevitably comes more mistakes; the Santa Clara Principles are a crucial step toward addressing and mitigating those mistakes fairly.”
Collage via Substack image generator.