A faded poster showing Donald Trump and Alex Jones of Infowars on a street in Tbilisi, Georgia, February 2, 2018.

The delights and the dangers of deplatforming extremists

The crux of the problem with deplatforming: when it’s good, it’s excellent; and when it’s bad, it’s dangerous.

“Deplatforming works” has, in recent months, become a popular slogan on social media. When a widely reviled public figure is booted from a social media platform or a television channel, Twitter users repeat the phrase as a truism. And, indeed, there is evidence to support the claim that taking away someone’s digital megaphone can effectively silence them, or significantly reduce their influence.

After Twitter and Facebook permanently banned Donald Trump in January, for example, there was a noticeable and quantifiable drop in online disinformation. In 2016 Twitter took the then-unprecedented step of banning Milo Yiannopoulos, a notorious provocateur and grifter who disseminated hate speech and disinformation. Yiannopoulos tried vainly to mount a comeback, but never recovered from the loss of his bully pulpit. It appears his 15 minutes of fame are well over.

Alex Jones, the prominent conspiracy theorist and Infowars founder, was booted from multiple platforms in 2018 for violating rules against hate speech, among other things. Jones disseminated disgusting conspiracy theories like the claim that the Sandy Hook massacre was a hoax perpetrated to curtail gun rights, thus re-victimizing the parents of children who had been shot and killed at the Connecticut elementary school. His rants spawned fresh conspiracies about other mass shootings, like the one at the Marjory Stoneman Douglas High School in Parkland, which he said was staged by “crisis actors.” Jones boasted that banning him from mainstream platforms would only make him stronger. “The more I’m persecuted, the stronger I get,” he said. But three years later, his name has almost disappeared from the news cycle.

Experts on online hate speech, misinformation, and extremism agree that kicking extremist haters off platforms like Facebook and YouTube significantly limits their reach.

According to one recent study, “far right content creators” who were kicked off YouTube found they were unable to maintain their large audience on BitChute, an alternative video platform that caters to extremists. Another study found that a far-right user who is deplatformed simultaneously by several mainstream social media platforms rapidly loses followers and influence. In other words, toxic influencers who are forced off mainstream social media can migrate to fringe platforms that specialize in hosting extremists, but without YouTube’s reach they are starved of new targets to radicalize and recruit.

The removal of a Yiannopoulos or a Jones from the quasi-public sphere can be a huge relief to the people they target. However, I am not convinced that censorship is an effective tactic for social change. Nor do I believe that it is in our best interests to entrust social media corporations with the power to moderate our discourse.

The negative effects of deplatforming have not been studied as thoroughly as the positive effects—which is not surprising, given that the phenomenon is only a few years old. But there are a few clear possibilities, like the creation of cult-like followings driven by a sense of persecution, information vacuums, and the proliferation of “underground” organizing—such as the harassment campaigns organized by “incel” (involuntarily celibate) communities on sites like 4chan and then taken to more mainstream platforms like Twitter.

Substack, the subscription newsletter platform, now hosts several “deplatformed” people who are thriving, like “gender critical” activist and TV writer Graham Linehan (who was kicked off Twitter for harassing transgender people) and Bari Weiss, the self-proclaimed “silenced” journalist who claimed in her public resignation letter from The New York Times that her colleagues had created a work environment that was hostile to her. Substack lets authors set the terms for their newsletters, deciding on the subscription price and whether they’d like the company to assign them an editor. The company has also been clear about its views on content moderation, with which I largely agree: free speech is encouraged, with minimal content moderation. My concern is that newsletters facilitate the creation of a cult following, while giving writers with a persecution complex a place to join forces in a self-congratulatory, circular way.

Of course, even Substack has its limits: I doubt that the platform would be happy to host Alex Jones or Donald Trump.

Deplatforming can also have a damaging impact on fragile democracies.

In early June, Nigerian president Muhammadu Buhari issued a threat, via his Twitter account, that he would punish secessionists in the Biafra region. Twitter decided the threat violated its policies and removed the tweet. In response, the Nigerian government blocked access to the platform indefinitely and said those who circumvented the ban would be subject to prosecution—a situation that is, as of this writing, ongoing, although the government says it will restore access “in a few days.” Nigerian businesses are suffering from the ban, while those who do find a way to tweet risk arrest. It is a salutary example of how a social media company’s ostensibly righteous decision to censor a world leader can backfire.

The first time I heard the term “deplatforming,” it was used to describe student-led boycotts of guest speakers invited to campus. The mediator in these situations is the university administration, which responds to the demands of enrolled, tuition-paying students—who should have the ultimate say in who comes to speak at their university. But social media platforms are large multinational corporations. As I argue in my recent book, making corporations the gatekeepers for acceptable expression is deeply problematic.

In cases when the social media platform acts as an intermediary between external forces and an individual, the resulting scenario can resemble mob rule.

Chris Boutté, who runs “The Rewired Soul,” a YouTube channel about mental health, experienced the mob rule scenario firsthand. In his videos about mental health and addiction, Boutté talks about his own experience and draws on pop culture, often using illustrative examples from the world of YouTube influencers. He attracted angry detractors who believed he was causing harm by speculating about the mental health of popular YouTube stars. In an effort to silence Boutté, his critics attacked him in their own videos, which ultimately resulted in his receiving death threats.

“Everything I did was from a good place,” he told me during a recent conversation. “In their mind, I was so dangerous that I should not be able to speak. So that’s where my concerns with deplatforming come in, when you get a mob mentality [combined with] misinformation.” He added: “I’m not a big fan of the court of public opinion.” Boutté says that his angry critics’ efforts to get him deplatformed included “dislike bomb” campaigns, whereby users mass-dislike videos in an effort to trick the YouTube algorithm. According to Boutté, the tactic worked: His channel is no longer financially viable.
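To make the mechanics of a “dislike bomb” concrete, here is a minimal, purely illustrative sketch. It is not YouTube’s actual ranking or monetization logic, which is proprietary; the `Video` structure, the dislike weighting, and the numbers are assumptions invented for the example. It simply shows how a naive engagement score that folds raw dislike counts into its calculation can be dragged below zero by a coordinated campaign, regardless of how the organic audience actually responded.

```python
# Purely illustrative: a naive engagement score, NOT YouTube's actual
# (proprietary) ranking or monetization logic. All numbers are invented.
from dataclasses import dataclass


@dataclass
class Video:
    views: int
    likes: int
    dislikes: int


def engagement_score(v: Video, dislike_weight: float = 2.0) -> float:
    """Toy metric: net approval per view, with dislikes weighted more heavily."""
    if v.views == 0:
        return 0.0
    return (v.likes - dislike_weight * v.dislikes) / v.views


# A video with an ordinary, organic reception...
organic = Video(views=50_000, likes=4_000, dislikes=200)
# ...and the same video after a coordinated campaign adds thousands of dislikes.
brigaded = Video(views=56_000, likes=4_100, dislikes=6_200)

print(round(engagement_score(organic), 3))   # 0.072
print(round(engagement_score(brigaded), 3))  # -0.148
```

Any system that ties recommendations or ad eligibility to a signal like this, without checking whether the dislikes come from coordinated, inauthentic activity, is vulnerable to exactly the kind of brigading Boutté describes.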

Mobs who take matters into their own hands to get someone removed from a platform have been around for a long time. In recent years, however, they have become more sophisticated at manipulating recommendation algorithms and reporting systems, as the public’s understanding of how platforms work has grown.

According to one recent Vice report, there is a cottage industry of professional scammers who exploit Instagram’s policies to get individuals banned by making fraudulent claims against them. Want to get someone kicked off Instagram? Pay a professional to report them (falsely) for using a fake identity on their profile. Anyone can be targeted by these tactics. Repressive governments, for example, target the Facebook accounts of journalists, democracy activists, and marginalized communities worldwide.

So here is the crux of the problem with deplatforming: when it’s good, it’s excellent; and when it’s bad, it’s dangerous. Deftly removing noxious propagandists is good. Empowering ordinary people to silence a common “enemy” by manipulating an algorithm is not good. Silencing marginalized activists fighting repressive governments is very, very bad.

Finally: Is censorship really a meaningful strategy for social change? Surely the most effective means of routing hate speech is to tackle its root causes rather than hacking at its symptoms. Online misinformation and extremism are currently hot topics of study, the darlings of funders in the digital space, with millions of dollars doled out to academic institutions. Certainly, online hate speech is an important area of study, but the intense focus on this one issue can come at the expense of other urgent social issues—like online privacy, the declining right to free expression worldwide, and the ongoing struggles against repressive governments.

I suggest that deplatforming should be viewed and wielded with extreme caution, rather than presented as a means of fixing the internet—or, more importantly, our societies.