Substack Needs to Decide If It’s Special or Not
Much has been written about Substack’s Nazi problem and its subsequent messaging that it’s OK with Nazis, to the point that it will not remove any of them unless they are inciting specific violence.
Our content guidelines do have narrowly defined proscriptions, including a clause that prohibits incitements to violence. We will continue to actively enforce those rules while offering tools that let readers curate their own experiences and opt in to their preferred communities. Beyond that, we will stick to our decentralized approach to content moderation, which gives power to readers and writers.
It’s a familiar stance from a tech founder, but it still amounts to saying “I’m OK with that person doing crazy terrible things on my front lawn, but please don’t think I’m endorsing it…I’m just allowing it to happen on my property and helping them raise funds.”
Casey Newton at Platformer discussed this recently, and added:
[Substack co-founder Hamish] McKenzie’s perspective – that sunlight is the best disinfectant, and that censorship backfires by making dangerous ideas seem more appealing – is reasonable for many or even most circumstances. It is a point of view that informs policies at many younger, smaller tech platforms, owing both to the techno-libertarian streak that runs through many founders in Silicon Valley and the fact that a hands-off approach to content moderation is easier and less expensive than the alternatives.
I haven’t met McKenzie, but if I did, as someone in the industry with, I think, a similar product-focused mind, I would put it to him like this:
If you contend that kicking Nazis off your platform means they will just produce the same content somewhere else, just as successfully, are you saying that Nazis, or any random loathsome group, could build a platform as large and important as Substack? If so, is your product really not all that special? If you disagree, and I think you might, and believe that Substack is special in terms of reach and importance, then that is exactly why you should remove voices that have near-universal disapproval. Substack, as an important platform, can deplatform these people, and while they will surely go somewhere else, that somewhere else won’t have the reach or monetization of Substack. How is that a bad thing? How is that making anything worse? This isn’t a hot take, this isn’t a hard decision, and Substack isn’t the government. I’m guessing you wouldn’t allow a Nazi to hang out in the Substack lobby and sell t-shirts, but I struggle to see how the current situation on the Substack platform is any different.