Everyone can agree that tech companies should clamp down on bots, paid trolls, and assorted other bad actors who parade on social media as democratically engaged citizens but who use these platforms to pursue a hidden agenda. But what about bona fide, bottom-up revolutionary political talk? What should social media companies do when ordinary people come together on their platforms to discuss rebellion against the established order?
And on a less dramatic but related note, what should tech companies do about the internal political organizing and social justice debates that are tearing apart their workplaces and eating up productivity?
When people want to talk about changing the world — really changing it in ways that others will find threatening for various reasons — in what digital contexts should they be allowed to do that? On Facebook? In their company’s public Slack channel?
As usual with this newsletter, I don’t have easy answers for you, but I am here to exhaust you with my questions and to take up space like a majestic sea lion.
The Facebook post-mortem
Facebook has a problem with revolutionaries, and a newly leaked document confirms that the company knows it. And you should know it, and should think about it, because if you're any flavor of civil libertarian (as I am), the questions this raises are kind of thorny. That's especially true for those of us with strongly pro-Second Amendment leanings.
BuzzFeed got hold of an internal Facebook post-mortem on the January 6th Capitol insurrection and has published its contents.
There’s a lot in this report, but there’s one part of it in particular that I believe matters a great deal: the discussion of “authentic harm.”
We have little policy around coordinated authentic harm. While some of the admins had VICN ties or were recidivist accounts, the majority of the admins were “authentic.” StS and PP were not directly mobilizing offline harm, nor were they directly promoting militarization. Instead, they were amplifying and normalizing misinformation and violent hate in a way that delegitimized a free and fair democratic election. The harm existed at the network level: an individual’s speech is protected, but as a movement, it normalized delegitimization and hate in a way that resulted in offline harm and harm to the norms underpinning democracy.
What do we do when a movement is authentic, coordinated through grassroots or authentic means, but is inherently harmful and violates the spirit of our policy? What do we do when that authentic movement espouses hate or delegitimizes free elections? These are some of the questions we’re trying to answer through research and tool building in the Disaggregating Harmful Networks Taskforce, and that we’re wrestling through in the Adversarial Harmful Networks policy xfn.
In the text above, “authentic” is shorthand for something like, “not a scammer” — i.e. not a Russian bot, or another agitator with a submerged agenda that’s separate from what they’re ostensibly posting about.
They use this language of “authentic” because all the company’s internal shenanigan-suppression apparatus has been focused on rooting out fakery. This anti-flimflam factor is what unites the different parts of every social media platform’s “trust and safety” efforts — spam, malware, disinformation, fake news, fraud, and so on. This “no faking it” approach works well for platforms because everyone can agree that fakery is bad and should be stopped.
But such clear-cut policies quickly run into trouble when not everyone can agree on what’s fake and what isn’t. Like, was the election really stolen, or is that a bunch of lies? Now, before you hit the “unsubscribe” button, know that I’m asking if the 2016 election was stolen, not the 2020 one. Unless you swing the other way, in which case I mean the election in Venezuela. Or something.
The problem of placing tech platforms in the position of sorting the real from the fake is bad enough, but the above post-mortem excerpt raises an even thornier problem. Specifically, it seems that most of the users Facebook found who made up the backbone of Stop the Steal (StS) were probably honest-to-goodness, grassroots citizens who truly believed they were acting in the best interests of the republic.
So Facebook is on the horns of a dilemma: what do you do about authentic, bottom-up, democratic activity aimed at coordinating mass action that has a high likelihood of violence and/or is aimed at insurrection or revolution?
Maybe the users in question are not even openly discussing violence or speaking about it in code, but the clear implication of them all getting what they want is a lot of violence and mayhem.
Now, before you unsubscribe, know that I’m referring to the police abolition movement, and to how, if they get their way, a lot of vulnerable people will die at the hands of violent criminals. Unless you swing the other way, in which case I mean Stop the Steal.
I guess the deeper question is, to what extent should a for-profit tech company permit platforms it controls to be used to coordinate activity that threatens the very status quo it benefits from?
Or, as the Facebook document puts it, what do you do when the “harm” is not in any one individual’s speech, but in the aggregate, network effect of all that speech in the outside world?
The (Coin)Basecamp Memo
No more societal and political discussions on our company Basecamp account. Today’s social and political waters are especially choppy. Sensitivities are at 11, and every discussion remotely related to politics, advocacy, or society at large quickly spins away from pleasant. You shouldn’t have to wonder if staying out of it means you’re complicit, or wading into it means you’re a target. These are difficult enough waters to navigate in life, but significantly more so at work. It’s become too much. It’s a major distraction. It saps our energy, and redirects our dialog towards dark places. It’s not healthy, it hasn’t served us well. And we’re done with it on our company Basecamp account where the work happens. People can take the conversations with willing co-workers to Signal, Whatsapp, or even a personal Basecamp account, but it can’t happen where the work happens anymore.
Basecamp co-founder and rockstar Ruby coder David Heinemeier Hansson followed up in a blog post:
Basecamp should be a place where employees can come to work with colleagues of all backgrounds and political convictions without having to deal with heavy political or societal debates unconnected to that work...
We also like to tell ourselves that having these discussions with the whole company is “healthy”. I used to think that too, but I no longer do. I think it’s become ever more stressful, unnerving, and counterproductive. No comment thread on Basecamp is going to close the gap on fundamental philosophical and political differences. And we’re left worse for wear when we try.
Therefore, we’re asking everyone, including Jason and me, to refrain from using our company Basecamp or HEY to discuss societal politics at work effective immediately.
The only internal employee reaction I’m aware of so far is this very weird, awkward podcast episode where the two speakers are clearly upset about the move:
For context on this whole controversy, Hansson is pretty vocally progressive on Twitter — some would even say he’s “woke.” So the fact that they’ve gone and done this, with the full knowledge of how much backlash the (far less woke or even woke-adjacent) Coinbase suffered after their memo, suggests that things are pretty bad internally.
But of course, not only did Coinbase move on from the controversy and the tidal wave of bad press, but they did so via a whopper of a $100 billion IPO that sent a very powerful signal to everyone paying attention: the Twitter outrage machine’s bark is worse than its bite.
If I’m one of the Basecamp bosses, and I’m sick to death of a toxic, politically charged internal work environment, then that Coinbase IPO is all the green light I need to shed a bunch of activist programmers and refocus on the bottom line.
But to back up again to the deeper question, I think it’s the exact same as in the case of Facebook, i.e., to what extent should a for-profit tech company permit platforms it controls to be used to coordinate activity that threatens the very status quo it benefits from?
Or, what do you do when the problem isn’t in any one individual’s speech, but in the aggregate, network effect that such speech is having on the company?
I get that “political neutrality” is itself a political stance that is de facto on the side of the status quo. But do we really want private, for-profit companies — companies that were founded and built to execute a pretty narrow, profit-driven mission — to deliberately seek to shape society in ways that are far afield of the mission they’ve proven themselves to be good at?
Doesn’t that strike you as a weird sort of quasi-libertarian dystopian vision, where private corporations are the vehicles for important social reform?