Why ISIS is easier for big tech to fight than white supremacy

Social media networks are facing a reckoning for their role in spreading far-right terrorist propaganda, after a deadly attack on a New Zealand mosque was live-streamed on Facebook.

Recognizing that the gunman’s gruesome stunt was designed to go viral, governments and business leaders are now calling on Facebook, Twitter and Google to do more to rid their platforms of hate speech that could encourage violence.

Most social platforms have taken aggressive action to tackle another extremist group in recent years: ISIS. But their response to white supremacists has been slower.

Experts say that’s because far-right extremism online isn’t as easy to detect, citing the movement’s innately political nature and its culture of ambiguous in-jokes. Researchers and counter-terrorism experts often compare the messages white supremacists spread online with those propagated by the Islamic extremist terror group, because both rely on the same tactic of online radicalization.

“I don’t think (social media companies) have the capabilities, even at a basic level, when white supremacist content is flagged to act on it,” Joshua Fisher-Birch, a content review specialist at the Counter Extremism Project, a US-based nonprofit, told CNN, adding that he doubted those companies were taking white supremacist content seriously enough.

Fisher-Birch, who monitors the proliferation of extremist content online, said that while social platforms and video streaming services have been vigilant in suspending accounts that share ISIS content, white supremacist messaging has spread largely unchecked.

White nationalism online is outpacing ISIS propaganda

Amid a social network crackdown on ISIS propaganda, US white nationalist movements have thrived, with followers growing by more than 600% on Twitter since 2012, according to a 2016 study by the Program on Extremism at George Washington University. In fact, growth in white nationalist and Nazi accounts on Twitter outpaced ISIS by almost every metric, in part because they faced less pressure from suspensions.

J.M. Berger, the study’s author, said that the findings “suggest that the battle against ISIS on social media is only the first of many challenges to mainstream, normative values,” adding that “other extremist groups” have been able to learn from ISIS’ successes and mistakes to develop their own digital strategies.

That strategy was on full display in the staging of the New Zealand attack. On Twitter and 8chan, an online message board that allows unconstrained speech including racist and extremist posts, the shooter teased his imminent attack with a call to spread his message through memes and “shitposting,” slang for pushing out a huge volume of low-quality, ironic content to elicit a reaction.

The response was unprecedented: in the first 24 hours after the video was streamed live on its platform, Facebook said it removed 1.5 million videos of the attack globally, more than 1.2 million of which were blocked at upload.

The flurry came after Facebook’s systems failed to catch the initial video, which was viewed live fewer than 200 times before copies spread across the internet. The company said it removed the video only after being alerted by New Zealand police.

Facebook’s failure to catch the initial livestream comes amid repeated pledges to step up moderation of its platforms. The company has recently hired tens of thousands of content moderators. Google’s video streaming service YouTube has also been investing in human moderators to clean up offensive videos on its service.

Google said that the volume of related videos shared on YouTube in the same period was, at times, as fast as “a new upload every second.” Google said that it removed tens of thousands of videos and terminated hundreds of accounts.

Twitter said it was “continuously monitoring and removing any content that depicts the tragedy.”

The New Zealand attack appeared to be one of the first times that far-right terrorism was treated by social networks with the same wholesale erasure as pro-ISIS material. But the speed at which copies of the video spread online reflects the scale of the challenge facing social networks.

Why is white supremacist content so hard to stop?

In 2016, under pressure from lawmakers in the United Kingdom, Europe and the United States, YouTube, Facebook, Microsoft and Twitter launched a shared industry database of “hashes” — digital fingerprints of extremist imagery. Its aim is to flag ISIS propaganda for removal and, in some cases, block it from ever being posted.
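
Conceptually, such a system computes a fingerprint of every upload and checks it against the shared list before the file goes live. Below is a minimal Python sketch of that idea, using a plain cryptographic hash and a made-up blocklist entry; the companies’ real systems are understood to use perceptual hashing, which can also match re-encoded or slightly altered copies of a file.

```python
import hashlib

# Hypothetical blocklist entry standing in for a shared industry hash database.
# A plain SHA-256 digest only catches byte-identical copies; perceptual hashes
# are what let platforms match "visually distinct" variants of the same footage.
KNOWN_EXTREMIST_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def fingerprint(file_bytes: bytes) -> str:
    """Return a hex digest used as the file's fingerprint."""
    return hashlib.sha256(file_bytes).hexdigest()

def should_block(file_bytes: bytes) -> bool:
    """Reject the upload if its fingerprint is on the blocklist."""
    return fingerprint(file_bytes) in KNOWN_EXTREMIST_HASHES

# Example: screen an upload before it is published.
if should_block(b"...uploaded video bytes..."):
    print("Upload rejected: matches shared hash database")
```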

Since its peak in 2014, when ISIS’ media apparatus spammed social media with grisly videos, the group’s online presence has shrunk dramatically, the George Washington University report points out.

After the New Zealand attack, Facebook said it had shared hashes of more than 800 visually distinct videos of the attack via the industry database.

But Julia Ebner, a fellow at the London-based Institute for Strategic Dialogue who specializes in Islamist and far-right extremism, told CNN that Facebook and other big tech firms have not deployed this strategy as aggressively against far-right extremist content in the past.

That’s partly because white nationalist messaging is not easily recognizable, said Ebner. It is often veiled in memes, allusions and insider jokes.

“Alt-right extremist networks are very good at circumventing existing legislation (policies and practices on platforms). They use irony and sarcasm to spread hateful ideas and hide them behind memes,” said Ebner.

“The alt-right has tapped into this in a way that no jihadist movement has.”

In contrast, ISIS content is easier to identify because it tends to repeat words and phrases that don’t appear in other content, said Pedro Domingos, a professor of computer science at the University of Washington and author of “The Master Algorithm.”

The formulaic way that ISIS disseminated propaganda — using hashtags in different languages, coded speech, watermarked videos and iconography — also allowed review teams to easily create hashes of that material and add them to a blacklist.
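
That formulaic quality is what makes relatively simple pattern matching workable. As a rough illustration (not any platform’s actual system), here is a short Python sketch of the kind of keyword and hashtag flagging a review team could assemble; the ironic, ever-shifting memes described earlier rarely repeat themselves this predictably.

```python
import re

# Illustrative placeholder patterns, not any platform's real list.
# Formulaic propaganda reuses the same slogans, hashtags and watermarks,
# so even a crude pattern list surfaces most of it for human review.
FLAGGED_PATTERNS = [
    re.compile(r"#example_propaganda_tag\b", re.IGNORECASE),
    re.compile(r"example recruiting slogan", re.IGNORECASE),
]

def flag_for_review(post_text: str) -> bool:
    """Return True if the post matches any known propaganda pattern."""
    return any(pattern.search(post_text) for pattern in FLAGGED_PATTERNS)

print(flag_for_review("Join us! example recruiting slogan #example_propaganda_tag"))  # True
print(flag_for_review("An ordinary post about the news"))                             # False
```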

Domingos said that digital platforms don’t yet have the technical ability to combat white supremacist accounts in the same way they’ve tackled ISIS.

“The main problem is that the (far-right extremist) content is too variable and multifarious to be reliably distinguished from acceptable content by the filtering algorithms that tech companies use, even state-of-the-art ones,” Domingos said in an email to CNN.

As part of the current debate, technology experts are calling for platforms like Facebook to make the inner workings of their algorithms transparent, giving users more clarity around the content they see and why they see it. Without that transparency, it’s easy for users to think that algorithms are optimized for truth, rather than providing a pipeline of related content to keep people on platforms.

But algorithmic transparency could also backfire, said Domingos, making it easier for white supremacists to game them.

It’s not just a technological problem

Even if a more advanced algorithmic capability existed — one that would stop the spread of nuanced hate speech — tech companies would ultimately need to be committed to deploying it. This requires them to accept more responsibility as publishers: moderating and removing content long before it leads to violence or bullying.

It wasn’t until a 2017 white supremacist rally turned deadly in Charlottesville, Virginia, that most major social media companies, domain registrars and payment platforms looked at addressing domestic extremism.

Several white nationalist accounts and sites were banned from various platforms, including one of the most influential neo-Nazi sites online, the Daily Stormer.

Asked how Facebook is tackling far-right extremist content shared on its platform, a spokesperson told CNN that it is investing in security teams and technical tools to “proactively detect hate speech” and is working with partners to better understand hate organizations as they evolve.

“We ban these organizations and individuals from our platforms and also remove all praise and support when we become aware of it,” the spokesperson said.

A YouTube spokesperson told CNN that it had “heavily invested in human review teams and smart technology” in order to “quickly detect, review, and remove” extremist content. “Hate speech and content that promotes violence have no place on YouTube,” the spokesperson said.

A spokesperson for Twitter said that the company’s policy against hate speech “prohibits behavior that targets individuals based on protected categories including race, ethnicity, national origin or religious affiliation.”

“Where we identify content that breaks these rules, we take aggressive enforcement action,” the spokesperson told CNN.

Should social networks police political rhetoric?

Joan Donovan, director of the Technology and Social Change Research Project at Harvard University’s Shorenstein Center, points out another reason why US tech companies have been slower to react to white supremacist content than to ISIS-related content: Politicians around the world are using the same rhetoric.

“The culture of Silicon Valley itself isn’t designed to look inward,” Donovan told CNN. “So when we’re talking about white supremacy, people on platforms remark that they don’t know how to spot it because it’s sometimes ambiguous, sometimes coded, or sometimes reflects politicians’ language.”

The overlap between individuals who promote white supremacist ideology online and populist political parties around the globe poses a problem for platforms, in a way that combating ISIS did not.

In 2015, during the US presidential campaign, Donald Trump himself was accused of hate speech after he posted a statement from his website on Facebook calling for a blanket Muslim travel ban, which was later removed. He has also been criticized for posting a racially charged campaign ad and for seeming to allude to anti-Semitic conspiracy theories in his tweets. Trump has said he is not a racist and has condemned neo-Nazis.

White nationalist Twitter accounts reference Trump and Trump-related hashtags like #makeamericagreatagain more than almost any other topic except for #whitegenocide, according to the same George Washington University study, which underscores the obstacles in rooting out offending accounts.

Tackling such polarizing rhetoric by influential figures needs to be part of any long-term digital strategy, said Ebner.

“One of the most alarming aspects of all of this is the mainstreaming of dehumanizing language and black and white narratives by politicians like Trump, (Italian Deputy Prime Minister Matteo) Salvini, or the Australian senator,” said Ebner.

Shortly after the New Zealand mass shooting, Australian Senator Fraser Anning blamed the violence on Muslim immigration in a post on Twitter.

Asked whether attacks like the one in New Zealand could happen in Italy because of his aggressive, anti-immigrant rhetoric, Salvini said: “The only extremism that deserves attention is the Islamic one.”

According to Ebner, to confront divisive rhetoric like this we must focus on building strong civil society efforts and promote education around digital citizenship. “We don’t want to clamp down on freedom of speech, but we need to challenge this kind of rhetoric … and that’s a longer term investment.”

 
