WP_Post Object
(
[ID] => 3231
[post_author] => 2
[post_date] => 2021-10-01 02:30:52
[post_date_gmt] => 2021-10-01 02:30:52
[post_content] => The crux of the problem with deplatforming: when it’s good, it’s excellent; and when it’s bad, it’s dangerous.
“Deplatforming works” has, in recent months, become a popular slogan on social media. When a widely reviled public figure is booted from a social media platform or a television channel, Twitter users repeat the phrase as a truism. And, indeed, there is evidence to support the claim that taking away someone’s digital megaphone can effectively silence them, or significantly reduce their influence.
After Twitter and Facebook permanently banned Donald Trump in January, for example, there was a noticeable and quantifiable drop in online disinformation. In 2016 Twitter took the then-unprecedented step of banning Milo Yiannopoulos, a notorious provocateur and grifter who disseminated hate speech and disinformation. Yiannopoulos tried vainly to mount a comeback, but never recovered from the loss of his bully pulpit. It appears his 15 minutes of fame are well over.
Alex Jones, the prominent conspiracy theorist and Infowars founder, was booted from multiple platforms in 2018 for violating rules against hate speech, among other things. Jones disseminated disgusting conspiracy theories like the claim that the Sandy Hook massacre was a hoax perpetrated to curtail gun rights, thus re-victimizing the parents of children who had been shot and killed at the Connecticut elementary school. His rants spawned fresh conspiracies about other mass shootings, like the one at the Marjory Stoneman Douglas High School in Parkland, which he said was staged by “crisis actors.” Jones boasted that banning him from mainstream platforms would only make him stronger. “The more I’m persecuted, the stronger I get,” he said. But three years later, his name has almost disappeared from the news cycle.
Experts on online hate speech, misinformation, and extremism agree that kicking extremist haters off platforms like Facebook and YouTube significantly limits their reach.
According to one recent study, “far right content creators” who were kicked off YouTube found they were unable to maintain their large audiences on BitChute, an alternative video platform that caters to extremists. Another study found that a far-right user who is deplatformed simultaneously by several mainstream social media platforms rapidly loses followers and influence. In other words, toxic influencers who are forced off mainstream social media can migrate to fringe platforms that specialize in hosting extremists, but without a presence on YouTube they are starved of new targets to radicalize and recruit.
The removal of a Yiannopoulos or a Jones from the quasi-public sphere can be a huge relief to the people they target. However, I am not convinced that censorship is an effective tactic for social change. Nor do I believe that it is in our best interests to entrust social media corporations with the power to moderate our discourse.
The negative effects of deplatforming have not been studied as thoroughly as the positive effects—which is not surprising, given that the phenomenon is only a few years old. But there are a few clear possibilities, like the creation of cult-like followings driven by a sense of persecution, information vacuums, and the proliferation of “underground” organizing—such as the organized harassment campaigns that are organized by “incel” (involuntarily celibate) communities on sites like 4Chan and then taken to more central platforms like Twitter.
Substack, the subscription newsletter platform, now hosts several “deplatformed” people who are thriving, like “gender critical” activist and TV writer Graham Linehan (who was kicked off Twitter for harassing transgender people), or Bari Weiss, the self-proclaimed “silenced” journalist who claimed in her public resignation letter from The New York Times that her colleagues had created a work environment that was hostile to her. Substack allows the author to set the terms for their newsletter by deciding on the subscription price, and whether they’d like the company to assign them an editor. The company has also been clear about its views on content moderation, with which I largely agree: free speech is encouraged, with minimal content moderation. My concern is that newsletters facilitate the creation of a cult following, while giving writers with a persecution complex a place to join forces in a self-congratulatory, circular way.
Of course, even Substack has its limits: I doubt that the platform would be happy to host Alex Jones or Donald Trump.
Deplatforming can also have a damaging impact on fragile democracies.
In early June Nigerian president Muhammadu Buhari issued a threat, via his Twitter account, that he would punish secessionists in the Biafra region. Twitter decided the threat violated its policies and removed the tweet. In response, the Nigerian government blocked access to the social media company indefinitely and said those who circumvented the ban would be subject to prosecution—a situation that is, as of this writing, ongoing—although the government says it will restore access “in a few days.” Nigerian businesses are suffering from the ban, while those who do find a way to tweet risk arrest. This is a salutary example that illustrates how a social media company’s ostensibly righteous decision to censor world leaders can backfire.
The first time I heard the term “deplatforming,” it was used to describe student-led boycotts of guest speakers invited to campus. The mediator in these situations is the university administration, which responds to the demands of enrolled, tuition-paying students—who should have the ultimate say in who comes to speak at their university. But social media platforms are large multinational corporations. As I argue in my recent book, making corporations the gatekeepers for acceptable expression is deeply problematic.
In cases when the social media platform acts as an intermediary between external forces and an individual, the resulting scenario can resemble mob rule.
Chris Boutté, who runs a YouTube channel about mental health issues called “The Rewired Soul,” experienced the mob rule scenario firsthand. Boutté references pop culture in his videos about mental health and addiction, in which he talks about his own experience, often using illustrative examples from the world of YouTube influencers. He attracted angry detractors who believed he was causing harm by speculating about the mental health of popular YouTube stars. In an effort to silence Boutté, his critics attacked him in their own videos, which ultimately resulted in his receiving death threats.
“Everything I did was from a good place,” he told me during a recent conversation. “In their mind, I was so dangerous that I should not be able to speak. So that’s where my concerns with deplatforming come in, when you get a mob mentality [combined with] misinformation.” He added: “I’m not a big fan of the court of public opinion.” Boutté says that his angry critics’ efforts to get him deplatformed included “dislike bomb” campaigns, whereby users mass-dislike videos in an effort to trick the YouTube algorithm. According to Boutté, the tactic worked: His channel is no longer financially viable.
Mobs who take matters into their own hands, manipulating recommendation algorithms to get someone removed from a platform, have been around for a long time. In recent years, however, they have become more sophisticated; meanwhile, the public’s understanding of how platforms work has increased.
According to one recent Vice report there is a cottage industry of professional scammers who exploit Instagram’s policies to get individuals banned by making fraudulent claims against them. Want to get someone kicked off Instagram? Pay a professional to report them (falsely) for using a fake identity on their profile. Anyone can be targeted by these tactics. Repressive governments, for example, target the Facebook accounts of journalists, democracy activists and marginalized communities worldwide.
So here is the crux of the problem with deplatforming: when it’s good, it’s excellent; and when it’s bad, it’s dangerous. Deftly removing noxious propagandists is good. Empowering ordinary people to silence a common “enemy” by manipulating an algorithm is not good. Silencing marginalized activists fighting repressive governments is very, very bad.
Finally: Is censorship really a meaningful strategy for social change? Surely the most effective means of routing hate speech is to tackle its root causes rather than hacking at its symptoms. Online misinformation and extremism are currently hot topics of study, the darlings of funders in the digital space, with millions of dollars doled out to academic institutions. Certainly, online hate speech is an important area of study, but the intense focus on this one issue can come at the expense of other urgent social issues—like online privacy, the declining right to free expression worldwide, and the ongoing struggles against repressive governments.
I suggest that deplatforming should be viewed and wielded with extreme caution, rather than presented as a means of fixing the internet—or, more importantly, our societies.
[post_title] => The delights and the dangers of deplatforming extremists
[post_excerpt] => The negative effects of deplatforming have not been studied as thoroughly as the positive effects—which is not surprising, given that the phenomenon is only a few years old. But there are several case studies that illustrate the risks of kicking extremists off mainstream platforms.
[post_status] => publish
[comment_status] => closed
[ping_status] => open
[post_password] =>
[post_name] => the-delights-and-the-dangers-of-deplatforming-extremists
[to_ping] =>
[pinged] =>
[post_modified] => 2024-08-28 21:15:12
[post_modified_gmt] => 2024-08-28 21:15:12
[post_content_filtered] =>
[post_parent] => 0
[guid] => https://conversationalist.org/?p=3231
[menu_order] => 174
[post_type] => post
[post_mime_type] =>
[comment_count] => 0
[filter] => raw
)



Hazal Sipahi, host of the podcast "Mental Klitoris."
When she was a child growing up in provincial Turkey, Sipahi said, sexuality was only discussed in whispers; but as soon as she could speak English, she found an ocean of sexuality content available on the internet.
“I searched for information online and found it, only because I was curious,” she said. “I also learned many false things on the internet, and they were very hard to correct later on.”
For example, Sipahi explained, “For so long, we thought that the hymen was a literal veil like a membrane.” In Turkey there is a widespread belief that once the hymen is “deformed,” a woman’s femininity is damaged, and she somehow becomes less valuable as a future spouse.
“Mental Klitoris” is both Sipahi’s public service and her means of self-expression. She uses her podcast to correct misunderstandings and disinformation, to go beyond censorship and to translate new terminology into Turkish.
“I really wish I had been able to access this kind of information when I was around 14 or 15,” she said.
More than 45,000 people listen to Mental Klitoris, which provides them with access to crucial information in their native tongue. They learn terms like “stealthing,” “pegging,” “abortion,” “consent,” “vulva,” “menstruation,” and “slut-shaming.” Sipahi covers all these topics on her podcast; she says she’s adding important new vocabulary to the Turkish vernacular.
She’s also adding a liberal voice to the ongoing discussion about feminism, “which became even stronger in Turkey after #MeToo.” She believes her program will lead to a wave of similar content in Turkey.
“This will go beyond podcasts,” she said. “We will have a sexual opening overall on the internet.”
Inspired by contemporary creatives like Lena Dunham (“Girls”), Michaela Coel (“I Might Destroy You”),
Tuluğ Özlü
Asked to describe how she feels when she crosses the barriers created by widely shared social taboos about human sexuality, Özlü, who lives in Istanbul’s hip
Rayka Kumru
Kumru said one of the current barriers to freedom in Turkey was the lack of access to comprehensive sexuality education, information and skills such as sex-positivity, critical thinking around values and diversity, and communication about consent. She circumvents that barrier by informing her viewers and listeners about them directly.
“Once connections and collaborations are established between policy, education, and [particularly sexual] health, and when access to education and to shame-free, culturally specific, scientific, and empowering skills training is allowed, we see that these barriers are removed,” Kumru explains. Otherwise, she says, the same myths and taboos continue to play out, making misinformation, disinformation, taboos, and shame ever more toxic.
Şükran Moral
When it comes to female sexuality, Moral said, Turkey’s art scene is still conservative. “There’s self-censorship among not only creators, but also viewers and buyers, so it’s a vicious cycle.”
Part of being an artist, particularly one who challenges the position of women, she said, is seeing a reaction to her work. “When art isn’t displayed,” she asked, “how do you get people to talk about taboos?”
Turkish academia also suffers from censorship of sex studies.
Dr. Asli Carkoglu, a professor of psychology at Kadir Has University, said it was not easy finding a precise translation for the English word “intimacy” in Turkish.
“There’s the word ‘mahrem,’” she said, but that term has religious connotations.
The difficulty in interpretation, she explains, illustrates the problem: In Turkey, intimacy has not been normalized.
President Recep Tayyip Erdogan and his conservative Justice and Development Party (AKP) have many times expressed support for gender-based segregation and a conservative lifestyle that protects their interpretation of Muslim values.
Erdogan, who has been in power since 2003, has his own ways of promoting those values.
“At least three children,” has long been the slogan of Erdogan’s population campaign, as the president implores married couples to expand their families and increase Turkey’s population of 82 million.
“For the government, sex means children, population,” Dr. Carkoglu explained.
Dr. Carkoglu believes that sex education should be left to the family, but “when the government acts as though sexuality is nonexistent, the family doesn’t discuss it. It’s the chicken-and-egg dilemma,” she said.
So, how do you overcome a taboo as deep-rooted as sexuality in Turkey? Carkoglu believes that the topic will have to be normalized through conversations between friends.
“That’s where the taboo starts to break,” she said. “Speaking with friends [about sexuality] becomes normal, speaking in public becomes normal, and then the system adapts.”
But for many Turks, speaking about sexuality is very difficult.
Berkant, 40, has made a living selling sex toys at his shop in the city of Adana, in southern Turkey, for the past two decades. But he said that he’s still too embarrassed to go up to a cashier in another store and say he wants to buy a condom.
“It doesn’t feel right,” he said, adding he doesn’t want to make the cashier uncomfortable.
He is seated comfortably at his desk as we speak; behind him, a wide selection of vibrators are arrayed on shelves.
Berkant and his older brother own one of three erotica shops in Adana. Most of their customers are lower middle class; one-third are female. “Many of them are government workers who come after hearing about us from a friend,” he said.
The shopkeeper said female customers phone in advance to check whether the shop is “available,” meaning empty.
He said he often refers women who describe certain complaints to a gynecologist.
“I see countless women who are barely aware of their own bodies,” he said.
Dr. Doğan Şahin, a psychiatrist and sexual therapist, said that the information women in Turkey hear when they are growing up has a lot to do with their avoidance of discussions about sex, even when the subject concerns their health.
[caption id="attachment_2971" align="aligncenter" width="1600"]
Advertisement for men's underwear in Izmir, Turkey.[/caption]
Men don’t really care whether the woman is aroused, willing or having an orgasm, he said. Unless the problem is due to pain, or vaginismus, couples rarely head to a therapist, he adds.
“[Women who grew up hearing false myths] tend to take sexuality as something bad happening to their bodies, and so, they unintentionally shut their vaginas, leading to vaginismus. This is actually a defense method,” he told The Conversationalist.
“They fear dying, they fear becoming a lower quality woman, or that sex is their duty.”
While most Turkish women find out about their sexual needs after getting married, the doctor says that, based on research he completed about 10 years ago, men tend to fall for myths about sexuality by watching pornography, which plants unrealistic fantasies about sex in their minds.
“Sexuality is also presented as criminal or banned in [Turkish] television shows. The shows take sexuality to be part of cheating, damaging passions or crimes instead of part of a normal, healthy, and happy life.”
He recommends that couples talk about sexuality and normalize it. Talking is crucial, and so is the language used in those conversations.
Bahar Aldanmaz, a Turkish sociologist studying for her PhD at Boston University, told The Conversationalist why talking about menstruation matters.
“A woman’s period is unfortunately seen as something to be ashamed of, something to be hidden,” she said. (According to Turkey’s language authority, the word “dirty” also means “a woman having her period.”)
“There are many children who can’t share their menstruation experience, or can’t even understand they are having their periods, or who experience this with fear and trauma.”
And this is what builds a wall of taboo around this essential issue, Aldanmaz says. It is one of the problems her non-profit organization “We Need To Talk” aims to address, along with related issues such as period poverty and period stigma.
Female hygiene products are taxed at as much as 18 percent—the same rate as diamonds, said Aldanmaz. She adds that this privileged access to basic health goods, a consequence of the roles imposed by Turkish social mores, is a principal driver of inequality.
“Despite declining income due to the COVID-19 pandemic, there is a serious increase in the pricing of hygiene pads and tampons. This worsens period poverty,” Aldanmaz says. She offers Scotland as an example of what she would like to see in Turkey: free sanitary products for all.
During Turkey’s government-imposed lockdown in May 2021, several photos showing tampons and pads in the non-essential sales part of markets stirred heated debates around the subject, but neither the Ministry of Family and Social Services nor the Health Ministry weighed in.
“We are fighting this shaming culture in Turkey,” Aldanmaz says, “by understanding and talking about it.”
[post_title] => Sexually aware and on air: Beyond Turkey's comfort zone
[post_excerpt] => Turkish podcasts that host frank conversations about sexuality are smashing taboos and filling information vacuums.
[post_status] => publish
[comment_status] => closed
[ping_status] => open
[post_password] =>
[post_name] => sexually-aware-and-on-air-beyond-turkeys-comfort-zone
[to_ping] =>
[pinged] =>
[post_modified] => 2024-08-28 21:14:02
[post_modified_gmt] => 2024-08-28 21:14:02
[post_content_filtered] =>
[post_parent] => 0
[guid] => https://conversationalist.org/?p=2949
[menu_order] => 187
[post_type] => post
[post_mime_type] =>
[comment_count] => 0
[filter] => raw
)



During the decade prior to the 2011 uprising, Egypt saw a blogging boom, with people from diverse socio-economic backgrounds writing outspoken commentary about social and political issues, even though they ran the risk of arrest and imprisonment for criticizing the state. The internet provided space for discussions that had previously been restricted to private gatherings; it also enabled cross-national dialogue throughout the region, between bloggers who shared a common language. Public protests weren’t unheard of—in fact, as those I interviewed for the book argued, they had been building up slowly over time—but they were sporadic and lacked mass support.
While some bloggers and social media users chose to publish under their own names, others were justifiably concerned for their safety. And so, the creators of “We Are All Khaled Saeed” chose to manage the Facebook page using pseudonyms.
Facebook, however, has always had a policy that forbids the use of “fake names,” predicated on the misguided belief that people behave with more civility when using their “real” identity. Mark Zuckerberg famously claimed that having more than one identity represents a lack of integrity, thus demonstrating a profound lack of imagination and considerable ignorance. Not only had Zuckerberg never considered why a person of integrity who lived in an oppressive authoritarian state might fear revealing their identity, but he had clearly never explored the rich history of anonymous and pseudonymous publishing.
In November 2010, just before Egypt’s parliamentary elections and a planned anti-regime demonstration, Facebook, acting on a tip that its owners were using fake names, removed the “We are all Khaled Saeed” page.
At this point I had been writing and communicating for some time with Facebook staff about the problematic nature of the policy banning anonymous users. It was Thanksgiving weekend in the U.S., where I lived at the time, but a group of activists scrambled to contact Facebook to see if there was anything they could do. To their credit, the company offered a creative solution: If the Egyptian activists could find an administrator who was willing to use their real name, the page would be restored.
They did so, and the page went on to call for what became the January 25 revolution.
A few months later, I joined the Electronic Frontier Foundation and began to work full-time in advocacy, which gave my criticisms more weight and enabled me to communicate more directly with policymakers at various tech companies.
Three years later, while driving across the United States with my mother and writing a piece about social media and the Egyptian revolution, I turned on the hotel television one night and saw on the news that police in Ferguson, Missouri had shot an 18-year-old Black man, Michael Brown, sparking protests that drew a disproportionate militarized response.
The parallels between Egypt and the United States struck me even then, but only in 2016 did I become fully aware. That summer, a police officer in Minnesota pulled over 32-year-old Philando Castile—a Black man—at a traffic stop and, as he reached for his license and registration, fatally shot him five times at close range.
Castile’s partner, Diamond Reynolds, was in the passenger’s seat and had the presence of mind to whip out her phone in the immediate aftermath, streaming her exchange with the police officer on Facebook Live.
Almost immediately, Facebook removed the video. The company later restored it, citing a “technical glitch,” but the incident demonstrated the power that technology companies—accountable to no one but their shareholders and driven by profit motives—have over our expression.
The internet brought about a fundamental shift in the way we communicate and relate to one another, but its commercialization has laid bare the limits of existing systems of governance. In the years following these incidents, content moderation and the systems surrounding it became almost a singular obsession of mine. I worked to document the experiences of social media users, collaborated with numerous individuals, and learned about the structural limitations to changing the system.
Over the years, my views on the relationship between free speech and tech have evolved. Once I believed that companies should play no role in governing our speech, but later I shifted to pragmatism, seeking ways to mitigate the harm of their decisions and enforce limits on their power.
But while the parameters of the problem and its potential solutions grew clearer, so did my thesis: Content moderation—specifically, the uneven enforcement of already-inconsistent policies—disproportionately impacts marginalized communities and exacerbates existing structural power imbalances. Offline repression is, as it turns out, replicated online.
The 2016 election of Donald Trump to the U.S. presidency brought the issue of content moderation to the fore; suddenly, the terms of the debate shifted. Conservatives in the United States claimed they were unjustly singled out by Big Tech and the media amplified those claims—much to my chagrin, since they were not borne out by data. At the same time, the rise of right-wing extremism, disinformation, and harassment—such as the spread of the QAnon conspiracy and wildly inaccurate information about vaccines—on social media led me to doubt some of my earlier conclusions about the role Big Tech should play in governing speech.
That’s when I knew that it was time to write about content moderation’s less-debated harms and to document them in a book.
Setting out to write about a subject I know so intimately (and have even experienced firsthand), I thought I knew what I would say. But the process turned out to be a learning experience that caused me to rethink some of my own assumptions about the right way forward.
One of the final interviews I conducted for the book was with Dave Willner, one of the early policy architects at Facebook. Sitting at a café in San Francisco just a few months before the pandemic hit, he told me: “Social media empowers previously marginal people, and some of those previously marginal people are trans teenagers and some are neo-Nazis. The empowerment sense is the same, and some of it we think is good and some of it we think is not good. The coming together of people with rare problems or views is agnostic.”
That framing guided me in the final months of writing. My instinct, based on those early experiences with social media as a democratizing force, has always been to think about the unintended consequences of any policy for the world’s most vulnerable users, and it is that lens that guides my passion for protecting free expression. But I also see now that it is imperative never to forget a crucial fact—that the very same tools which have empowered historically marginalized communities can also enable their oppressors.
[post_title] => Between Nazis and democracy activists: social media and the free speech dilemma
[post_excerpt] => The content moderation policies employed by social media platforms disproportionately affect marginalized communities and exacerbate power imbalances. Offline repression is replicated online.
[post_status] => publish
[comment_status] => closed
[ping_status] => open
[post_password] =>
[post_name] => between-nazis-and-democracy-activists-social-media-and-the-free-speech-dilemma
[to_ping] =>
[pinged] =>
[post_modified] => 2024-08-28 21:15:13
[post_modified_gmt] => 2024-08-28 21:15:13
[post_content_filtered] =>
[post_parent] => 0
[guid] => https://conversationalist.org/?p=2452
[menu_order] => 216
[post_type] => post
[post_mime_type] =>
[comment_count] => 0
[filter] => raw
)


