Frances Haugen gives evidence to the UK parliament on Monday, October 25, 2021.

The world’s most famous Facebook whistleblower should amplify those who came before her

Frances Haugen’s policy proposals are modest at best, amounting to little more than what Facebook has already proposed or supported. 

In the summer of 2014, the kidnapping of three Israeli teenagers in the West Bank by Hamas-affiliated Palestinians sparked a seven-week sustained Israeli military assault on Gaza, with the military wing of the Islamist organization simultaneously launching rockets into Israel. By the time a ceasefire was implemented, around 2,200 Palestinians were dead and more than 10,000 wounded, the vast majority of them civilians. On the Israeli side, 67 soldiers and six civilians were killed. The physical destruction in Gaza was immense, with entire neighbourhoods reduced to rubble. It was one of the deadliest conflicts in the region’s history.

Social media—which was by then a popular tool for activism used by both Palestinians and Israelis (as well as the Israeli state)—played a significant role in the conflict. Israelis used social media to draw attention to the kidnapping and murder of the three boys and to the fear wrought by Hamas’s rockets, while Palestinians sought to draw the world’s attention to the Israeli military’s use of immense force against civilians. Everyone used memes, hashtags, and videos to amplify their messaging.

Facebook, which was a key tool for the activists who organized the uprisings that rocked Tunisia and Egypt in 2010-12, was still a young platform. It had instituted its first community standards only three years prior. Now it was a key site for online conflict.

That summer, concerned Palestinian activists brought a Facebook page to my attention. It featured a sniper’s target, with the title, in Hebrew: “Kidnapped: Until the boys come back, we shoot a terrorist every hour.” The page had been created by Israelis advocating vigilante justice; they posted the photographs and names of various Palestinian political prisoners, calling for them to be shot in retribution for the killing of the three abducted Israeli boys.

There is no question that the page was inciting retributive violence; language in the ‘about’ section read: “We must use a strong hand to fight violent and life-threatening terror. The weakness shown by the Israeli Government, which released thousands of murderers has only increased their drive and led to the kidnapping of the teens. The only way to bring the teens back is to instill fear in our enemies and make them understand that they will suffer. We support executing a terrorist every hour until the teens are released.”

In Israel, killing Palestinians as revenge for an unconnected incident is known colloquially as a “price tag” killing; the US State Department has condemned the act as terrorism. The Facebook page objectively called for murder, which violated one of the precepts of the platform’s community standards: “Safety is Facebook’s top priority. We remove content and may escalate to law enforcement when we perceive a genuine risk of physical harm, or a direct threat to public safety. You may not credibly threaten others, or organize acts of real-world violence.”

But the company refused to delete the page, despite multiple reports from users. One Facebook policy staffer defended the decision by saying that the page administrators were calling for violence against terrorists, as though branding a person a terrorist justified advocating their extra-judicial murder. The page objectively violated Facebook’s own policy, but the company refused to admit it. Monika Bickert, who was then Facebook’s head of Global Policy Management, asserted in an interview that the page did not violate the company’s policy against hate speech.

This incident encapsulates how Facebook has dealt with content across the Middle East and North Africa for nearly a decade. In my book, Silicon Values: The Future of Free Speech Under Surveillance Capitalism, I describe several occasions on which Facebook either failed to act against threats or acted in bad faith—disappearing valuable content that served as documentation of history.

In another egregious example of acting in bad faith, Facebook removed Egypt’s leading dissident page just a few months before the 2011 revolution. “We Are All Khaled Said,” named for a young man beaten to death by Alexandria police in 2010, had hundreds of thousands of followers. Ultimately, the organizers of the page put out a call for mass protests on January 25, 2011. The Tahrir Uprising, named for Cairo’s central square, lasted 18 days; it ended with the fall of the Mubarak regime. 

This is why global civil society activists were unsurprised by the revelations in the internal documents that Facebook whistleblower Frances Haugen released, particularly those that detailed the company’s abject failures in moderating content in the region. While American news outlets expressed shock at these stories, civil society organizations like 7amleh, the Palestinian civil society NGO that focuses on human rights in digital spaces, saw confirmation of what they had been reporting for years.

Frances Haugen took a risk in releasing the documents, which provided important receipts for more than a decade of accusations against Facebook. But her policy proposals are modest at best, amounting to little more than what Facebook has already proposed or supported: She advocates reforming the important intermediary liability protections contained within Section 230, the law often dubbed “the most important law for online speech,” which protects companies from liability for what they choose to host (or remove). She has also spoken out against breaking up the increasingly monopolistic company, and told the French National Assembly that interoperability—allowing new services to “plug in” to existing, dominant ones, which is a core tenet of civil society proposals—won’t make a difference toward fixing our current conditions.

In fact, all of these things—intermediary liability protections, competition, interoperability, as well as other fundamental concepts like transparency and accountability—are vital to a free and open internet. While companies can and should moderate content, proposals to reform Section 230 are not only likely to be unconstitutional; they also open up space for frivolous lawsuits against US companies, which are protected by the First Amendment for what content they choose to host (or not host). Interoperability would give users far more choice over how and what platforms they use, by enabling them not only to modify the services they use and communicate across services more easily, but also potentially enabling different models for content moderation. And if we want a landscape where people have more choice over where they interact, access information, and express themselves, competition is a key component of any reform. These solutions are not a panacea, nor a substitute for more holistic societal fixes, but they’re important pieces of the puzzle.

Meanwhile, media outlets outside the US and Europe are still struggling to obtain access to the Facebook company documents that Haugen leaked, so that they can report, with cultural competence and local knowledge, on the company’s shortcomings in a number of regions. In addition, Haugen’s publicity tour in the United States and Europe has prioritized talking to lawmakers rather than listening to potential allies. Many of those lawmakers ignored the demands of civil society experts, a notable number of whom are women of color, yet they are willing to give their full attention to a former Facebook employee who is white and has a Harvard MBA.

Haugen isn’t entirely wrong: She understands that platforms need to be more transparent about how they create their policies and moderate content, as well as who is doing that moderation, and what sort of cultural and linguistic competencies those individuals have. Civil society actors, particularly those from the global south, have repeatedly emphasized the need for local expertise in content moderation—that is, the hiring of moderators with linguistic and cultural knowledge to tackle difficult speech issues, ensuring that truly harmful content, such as incitement, doesn’t flourish, while also making sure that legitimate content isn’t wrongfully removed. Here, her suggestions echo those of global civil society, although she has neither credited nor consulted those who have been making the same proposals for many years.

What Frances Haugen should have done—and still could do—is consult with the civil society experts, the activists and academics who have spent years studying and critiquing her former employer from the outside, painstakingly documenting its faults, and agitating for change. She needs to refocus her priorities to ensure that the documents are made accessible to journalists around the world who have the lived experience and deep expertise to analyze them properly. Instead of assuming she has all the answers, she should be using her significant power to call on Facebook—and lawmakers—to bring them to the table.