
Tech companies promise action after Christchurch massacre

French President Emmanuel Macron greets Canadian Prime Minister Justin Trudeau at the Christchurch Call to Action summit at the Elysée Palace on May 15, 2019. Photo by Cecilia Keating

Global tech giants have pledged to clamp down on terrorist and extremist content on their platforms, again.

On Wednesday afternoon in Paris, Facebook, Microsoft, Twitter, Google and Amazon signed up to the Christchurch Call to Action, a proposal spearheaded by New Zealand Prime Minister Jacinda Ardern after the livestreamed shooting that killed 51 worshippers praying in two mosques in March.

The video of the atrocity proliferated online; more than 1.5 million copies were subsequently removed from Facebook.

In a joint statement issued on Wednesday, the technology companies said: “The terrorist attacks in Christchurch, New Zealand in March were a horrifying tragedy. And so it is right that we come together, resolute in our commitment to ensure we are doing all we can to fight the hatred and extremism that lead to terrorist violence.”

The companies, mostly headquartered in California's Silicon Valley, said they are therefore “sharing concrete steps we will take that address the abuse of technology to spread terrorist content, including continued investment in technology that improves our capability to detect and remove this content from our services, updates to our individual terms of use, and more transparency for content policies and removals.”


The companies' representatives were joined in Paris by world leaders attending the summit, which sought to address online hate and its violent manifestations.

Justin Trudeau speaks to reporters outside Notre-Dame in Paris on May 15, 2019, flanked by French Culture Minister Franck Riester. Photo by Cecilia Keating

The online content providers have published a nine-point plan that details how they will combat the proliferation of extremism and violence on their channels. This includes updating their respective terms of use, community standards, codes of conduct and acceptable use policies in order to “expressly prohibit the distribution of terrorist and violent extremist content.” They have also pledged to enforce greater checks on livestreaming services.

One issue with this approach is that far-right extremists appear able to rebrand their content faster than Facebook can find and remove it, as National Observer reported last week.

Paul Joseph Watson and Infowars, the controversial site he edits and a major hub for conspiracy theories, disinformation and Islamophobic content, were both banned from Facebook earlier this month. But Watson circumvented the ban by repackaging his content under a different brand name and posting it on the platform.

On Tuesday, Facebook tightened the rules for its live video service, temporarily banning users from livestreaming after a first violation of company policy anywhere on the platform.

When questioned about why these users would not be banned from Facebook altogether, a company spokesperson told National Observer that accounts that break community standards were already subject to restrictions of varying severity.

The companies have also committed to continued investment in technology, such as digital fingerprinting and artificial intelligence-based systems that can detect violent and extremist content online, and to publishing regular reports on their efforts to detect and remove abusive and violent content.
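Digital fingerprinting in this context generally means checking uploads against a shared database of fingerprints of material already identified as terrorist content, such as the hash-sharing database operated by the Global Internet Forum to Counter Terrorism. The sketch below is a minimal illustration of that matching step only, not any platform's actual pipeline: the blocklist is hypothetical, and it uses a plain SHA-256 digest for exact matches, whereas production systems rely on perceptual hashes designed to survive re-encoding, cropping and the kind of rebranding described above.

```python
# Minimal sketch of fingerprint matching against a blocklist of known
# violating media. Assumptions: KNOWN_VIOLATING_FINGERPRINTS is a
# hypothetical placeholder; real systems use perceptual hashes rather
# than the exact-match SHA-256 shown here.
import hashlib

# Hypothetical fingerprints of media already flagged as violating policy.
KNOWN_VIOLATING_FINGERPRINTS = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",  # placeholder
}

def fingerprint(media: bytes) -> str:
    """Return a fingerprint of the uploaded bytes (exact match only)."""
    return hashlib.sha256(media).hexdigest()

def should_block(media: bytes) -> bool:
    """True if the upload matches a fingerprint of known violating content."""
    return fingerprint(media) in KNOWN_VIOLATING_FINGERPRINTS

if __name__ == "__main__":
    # Dummy payload and placeholder blocklist, so this prints False.
    print(should_block(b"example upload"))
```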

Facebook said in May last year that it struggled to stamp out hate speech because the computer algorithms it uses to track it down still require human assistance to judge context.

The technology heavyweights also said in their joint statement that they will work together to develop technology and build crisis protocols for urgent events, work together to educate the public on how to report online hate, and collectively support research that looks at the impact of online hate.

Where were Mark and Sheryl?

Critics noted that Facebook's most senior executives, founder and CEO Mark Zuckerberg and chief operating officer Sheryl Sandberg, were absent from talks on Wednesday. The company’s chief lobbyist, Nick Clegg, attended.

When questioned by National Observer about why Zuckerberg and Sandberg were absent, a Facebook spokesperson said Clegg was the best person to lead on the issue, given his experience working in government and with industry bodies on countering violent extremism.

Clegg served as the United Kingdom's deputy prime minister in the Conservative-Liberal Democrat coalition government from 2010 to 2015.

“We share the commitment of world leaders to curb the spread of terrorism and extremism online," he said in a statement. "In my time in government, I witnessed first-hand the tragic impact of terrorism and violent extremism on our communities. These are complex issues and we are committed to working with world leaders, governments, industry and safety experts at next week’s meeting and beyond on a clear framework of rules to help keep people safe from harm.”
