Facebook will ban white nationalism and separatism from its social media platforms starting next week, the technology company said on Wednesday, acknowledging that the beliefs cannot meaningfully be separated from white supremacy and organized hate groups.
"Today we’re announcing a ban on praise, support and representation of white nationalism and separatism on Facebook and Instagram, which we’ll start enforcing next week," it said in a statement entitled 'Standing Against Hate.' "It’s clear that these concepts are deeply linked to organized hate groups and have no place on our services."
The company said those searching on Facebook and Instagram for terms related to white supremacy would be redirected to Life After Hate, an organization founded by former violent extremists that provides crisis intervention, education, support groups and outreach.
Facebook has previously said it struggles to stamp out hate speech because the computer algorithms it uses to track it down still require human assistance to judge context.
In its latest statement, the company said it had long prohibited hateful treatment of people based on characteristics such as race, ethnicity or religion but didn’t originally apply the same rationale to expressions of white nationalism and separatism because "we were thinking about broader concepts of nationalism and separatism — things like American pride and Basque separatism, which are an important part of people’s identity."
Facebook has faced a storm of criticism in recent years over what critics call its failure to stop the spread of misleading or inflammatory information on its platform ahead of the 2016 U.S. presidential election and the Brexit referendum on leaving the European Union. It has since made a series of changes, including new policies limiting political advertisements.
While Facebook (and Twitter) have often faced the most intense scrutiny of the various social media platforms, an article published in The Atlantic suggests that Instagram is the internet's new home for hate.
Facebook also came under fire following the terrorist attack in New Zealand, after the suspect live-streamed video of the attack as it was carried out.
In a lengthy statement posted on March 20, Facebook admitted that its artificial intelligence software had failed to flag the video and that the company removed it only after being alerted by police. It also said it is reviewing how it handles user reports about content and is encouraging users to report material they find disturbing.
"During the entire live broadcast, we did not get a single user report," the company said. "This matters because reports we get while a video is live are prioritized for accelerated review. We do this because when a video is still live, if there is real-world harm we have a better chance to alert first responders and try to get help on the ground."
The company also blamed a coordinated effort by "bad actors" to spread the video online, noting that within 24 hours of the tragedy it had blocked more than 1.2 million uploads of the video and removed about 300,000 copies that had already been posted to the platform.
"What happened in New Zealand was horrific. Our hearts are with the victims, families and communities affected by this horrible attack," the company said. "We'll continue to provide updates as we learn more."