Once hailed as the great democratizers, social media platforms are now under fire for failing to moderate hate speech.
On June 6, 2020, I participated in Berlin’s Black Lives Matter demonstration. Thousands of people turned out, despite the pandemic, in solidarity with those who were demonstrating across the United States to protest the police killing of George Floyd—and to protest police killings of people of color in Germany. The mass gathering in the middle of the city’s historic Alexanderplatz was a powerful sight; standing there, wearing my mask and face shield, I felt for a moment as though things might change.
Exactly 10 years earlier and halfway around the world, another act of horrific police brutality occurred and changed the course of history. Khaled Saeed, a 28-year-old Egyptian man who lived in Alexandria, was sitting in a cybercafé when plainclothes police officers barged in and demanded to see everyone’s identification. Saeed refused. In response the officers, who almost never encountered defiance from the cowed citizens of the authoritarian state, began to beat him. They dragged him outside, continuing to batter him in full view of numerous witnesses. At one point, Saeed cried out, “I’m dying!” to which an officer responded: “I’m not leaving you until you are dead.” They drove off with Saeed’s lifeless body and returned 10 minutes later to dump it at the same place they had attacked him.
I was finishing my book, Silicon Values: The Future of Free Speech Under Surveillance Capitalism, on the day a teenage shop clerk in Minneapolis called 911 to report a customer he suspected of having passed him a counterfeit $20 bill. Derek Chauvin was one of the police officers who responded and arrested George Floyd soon after. A bystander used her phone to record the shocking spectacle of Chauvin, a white police officer, kneeling on Floyd’s neck for nearly 10 minutes as he gasped for breath, begged for mercy, and ultimately died. The video of the incident sparked a global movement.
While writing my book I thought about the ties that bind us across borders: our commonalities, our differences, and the ways in which powerful actors place limits on how we communicate, how we organize, and how we express ourselves.
The chapters covering the role that social media platforms played in the Arab uprisings of 2010–2011 and in the Movement for Black Lives were done by the time the protests of 2020 erupted; I was working on the book’s conclusion, in which I wrote:
“Police brutality and repression in Egypt and the United States are inextricably linked, through global networks of power and capitalism and more directly through military aid and training, but also through the similar ways in which the powerful seek to quash dissent—which includes platform censorship.”
In Egypt, Saeed’s death inspired activists to create a Facebook page called “We are all Khaled Saeed,” which became a place where thousands of Egyptians participated in conversations and polls about the oppressive state, police violence and repression. Later, it was the place where activists called for the protests that led to the January 25 revolution—an uprising that inspired numerous movements throughout the region and the world and shaped the ensuing decade. But the Egyptian revolution might never have begun as it did if events had evolved differently.
During the decade prior to the 2011 uprising, Egypt saw a blogging boom, with people from diverse socio-economic backgrounds writing outspoken commentary about social and political issues, even though they ran the risk of arrest and imprisonment for criticizing the state. The internet provided space for discussions that had previously been restricted to private gatherings; it also enabled cross-national dialogue throughout the region, between bloggers who shared a common language. Public protests weren’t unheard of—in fact, as those I interviewed for the book argued, they had been building up slowly over time—but they were sporadic and lacked mass support.
While some bloggers and social media users chose to publish under their own names, others were justifiably concerned for their safety. And so, the creators of “We are all Khaled Saeed” chose to manage the Facebook page using pseudonyms.
Facebook, however, has always had a policy that forbids the use of “fake names,” predicated on the misguided belief that people behave with more civility when using their “real” identity. Mark Zuckerberg famously claimed that having more than one identity represents a lack of integrity, thus demonstrating a profound lack of imagination and considerable ignorance. Not only had Zuckerberg never considered why a person of integrity who lived in an oppressive authoritarian state might fear revealing their identity, but he had clearly never explored the rich history of anonymous and pseudonymous publishing.
In November 2010, just before Egypt’s parliamentary elections and a planned anti-regime demonstration, Facebook, acting on a tip that its owners were using fake names, removed the “We are all Khaled Saeed” page.
By this point I had for some time been writing about, and communicating with Facebook staff regarding, the problematic nature of the policy banning pseudonymous accounts. It was Thanksgiving weekend in the U.S., where I lived at the time, but a group of activists scrambled to contact Facebook to see if there was anything they could do. To their credit, the company offered a creative solution: If the Egyptian activists could find an administrator who was willing to use their real name, the page would be restored.
They did so, and the page went on to call for what became the January 25 revolution.
A few months later, I joined the Electronic Frontier Foundation and began to work full-time in advocacy, which gave my criticisms more weight and enabled me to communicate more directly with policymakers at various tech companies.
Three years later, while driving across the United States with my mother and writing a piece about social media and the Egyptian revolution, I turned on the hotel television one night and saw on the news that police in Ferguson, Missouri had shot an 18-year-old Black man, Michael Brown, sparking protests that drew a disproportionate militarized response.
The parallels between Egypt and the United States struck me even then, but it was only in 2016 that I became fully aware of them. That summer, a police officer in Minnesota pulled over 32-year-old Philando Castile, a Black man, at a traffic stop and, as Castile reached for his license and registration, fatally shot him five times at close range.
Castile’s partner, Diamond Reynolds, was in the passenger seat and had the presence of mind to take out her phone in the immediate aftermath, streaming her exchange with the police officer on Facebook Live.
Almost immediately, Facebook removed the video. The company later restored it, citing a “technical glitch,” but the incident demonstrated the power that technology companies—accountable to no one but their shareholders and driven by profit motives—have over our expression.
The internet brought about a fundamental shift in the way we communicate and relate to one another, but its commercialization has laid bare the limits of existing systems of governance. In the years following these incidents, content moderation and the systems surrounding it became an almost singular obsession for me. I worked to document the experiences of social media users, collaborated with numerous individuals, and learned about the structural limitations to changing the system.
Over the years, my views on the relationship between free speech and tech have evolved. Once I believed that companies should play no role in governing our speech, but later I shifted to pragmatism, seeking ways to mitigate the harm of their decisions and enforce limits on their power.
But while the parameters of the problem and its potential solutions grew clearer, so did my thesis: Content moderation—specifically, the uneven enforcement of already-inconsistent policies—disproportionately impacts marginalized communities and exacerbates existing structural power imbalances. Offline repression is, as it turns out, replicated online.
The 2016 election of Donald Trump to the U.S. presidency brought the issue of content moderation to the fore; suddenly, the terms of the debate shifted. Conservatives in the United States claimed they were unjustly singled out by Big Tech and the media amplified those claims—much to my chagrin, since they were not borne out by data. At the same time, the rise of right-wing extremism, disinformation, and harassment—such as the spread of the QAnon conspiracy and wildly inaccurate information about vaccines—on social media led me to doubt some of my earlier conclusions about the role Big Tech should play in governing speech.
That’s when I knew that it was time to write about content moderation’s less-debated harms and to document them in a book.
Setting out to write about a subject I know so intimately (and have even experienced firsthand), I thought I knew what I would say. But the process turned out to be a learning experience that caused me to rethink some of my own assumptions about the right way forward.
One of the final interviews I conducted for the book was with Dave Willner, one of the early policy architects at Facebook. Sitting at a café in San Francisco just a few months before the pandemic hit, he told me: “Social media empowers previously marginal people, and some of those previously marginal people are trans teenagers and some are neo-Nazis. The empowerment sense is the same, and some of it we think is good and some of it we think is not good. The coming together of people with rare problems or views is agnostic.”
That framing guided me in the final months of writing. My instinct, based on those early experiences with social media as a democratizing force, has always been to think about the unintended consequences of any policy for the world’s most vulnerable users, and it is that lens that guides my passion for protecting free expression. But I also see now that it is imperative never to forget a crucial fact: the very same tools that have empowered historically marginalized communities can also enable their oppressors.