How can we stop social media from manipulating our emotions?

We can’t seem to quit social media, even though we know it’s not good for us. Is there a way to take back control of the user experience? 

Credit: Kayla Velasquez / Unsplash

The good news is that we now know, thanks to investigative journalism, that bad-faith actors are using social media to manipulate our emotions and, by extension, our political domain. The bad news is that despite rising awareness, nothing has changed. Facebook is still manipulating its algorithms so that we all live in our own information bubbles. Twitter is still full of fake accounts, often called bots, that dupe even sophisticated users — like prominent journalists or well-known politicians — into sharing information that simply is not true.

As Robert Mueller said while testifying to Congress last month, social media manipulators working for Russian intelligence continue to interfere in U.S. politics “right now.”

An addiction to social media goes well beyond craving the dopamine hit supplied by seeing one's tweet shared widely, or one's Facebook post liked many times. These days, journalists need Twitter to follow the news and promote their own work, while Facebook has become an all-but-essential tool for staying abreast of cultural events and keeping in touch with friends and family. But while we're "liking" photos of our friends' new babies and sharing important investigative journalism via Twitter, we are also inadvertently exposing ourselves to people whose job it is to manipulate our thoughts and emotions. And they are experts.

Now scholars and journalists are warning that YouTube has become a terribly dangerous radicalizing tool. Zeynep Tufekci, an expert in the sociology of technology, warned about YouTube last year in a column for The New York Times. Almost by accident, she wrote, she discovered that the video platform's recommendation algorithm steered users toward opinions more radical than the ones they seemed to hold. If a user searched for a Bernie Sanders video, for example, YouTube might recommend an Antifa video. Search instead for a video by a mainstream conservative commentator, and next thing you know the algorithm is suggesting videos by white nationalists. YouTube, concluded Tufekci, "[might be] one of the most powerful radicalizing instruments of the 21st century."

One year later, The New York Times published an investigative story showing how bad-faith actors manipulated YouTube videos in order to radicalize Brazilian society by upending long-held social norms. Teachers quoted in the article say, for example, that their students disrupted classes to quote conspiracy theories they had seen in YouTube videos. Meanwhile, Bolsonaro staffers were uploading videos that propagated conspiracy theories about teachers manipulating their students to support communism. The result: voters chose Jair Bolsonaro, Brazil's newly elected far-right president. Danah Boyd, the founder of Data & Society, told The New York Times that the YouTube-influenced results of Brazil's elections are "a worrying indication of the platform's growing impact on democracies worldwide."

Similarly, Britain saw its democracy undermined in 2016, when bad-faith actors who funded and led the Brexit campaign used Facebook to manipulate British public opinion. The result: a slight majority of Britons voted in favor of leaving the European Union.

But given that few Britons had expressed any interest in the EU prior to the referendum, how did this result come about? We now know, as The Guardian’s Carole Cadwalladr reported in a bombshell investigative piece, that British public opinion had been manipulated by misinformation published on Facebook accounts set up by a now-notorious (but then unknown) company called Cambridge Analytica. The same company later acknowledged the role it had played in manipulating public opinion in the United States prior to the 2016 presidential election. 

Craig Silverman, the Canadian BuzzFeed journalist who coined the term "fake news" in 2015, warned the CBC that Canadians are not immune from the disease of social media manipulation, either. Facebook, he told the CBC, is publishing anti-Trudeau propaganda as well as attacks on members of Trudeau's government who are people of color. Silverman added that there are "people acting outside of Canada publishing, in some cases, completely false or unsupported stories that are having an effect on what Canadians think about the current government and politics in Canada in general."

How are we to remain connected and informed while still confronting the crisis of disinformation?

Taylor Owen, a prominent digital media scholar who holds the Beaverbrook Chair in Media, Ethics and Communications at McGill University, suggests that some self-awareness would help. We must stop and think carefully before responding to news and opinion that stirs an emotion in us, whether satisfaction or anger. "When people are supplied with a wide variety of information that confirms their biases," he says, they are less willing to accept opinions that contradict them.

But journalists also have an important role to play, he said in an interview. According to Owen's newly published research, people who consume a great deal of news are not better informed. The reason: they tend to consume and retain information that confirms their biases. The media, Owen suggests, would be doing a public service by reporting deeply on issues for which there is bipartisan agreement. In Canada, interestingly, one of those issues is the environment.

Lisa Goldman

Lisa Goldman is the editor-in-chief of The Conversationalist. She lives in Montreal, where the winter is long and cold. Follow her on Twitter @lisang