
Is Facebook really fact-checking itself? We don't know

Facebook login screen displayed on mobile phone. Photo: CC0.


Facebook still isn't doing enough to combat the spread of misinformation and disinformation on its platform, according to a new transparency report released by the U.K.-based fact-checking charity Full Fact.

The 46-page report offers the first assessment of Full Fact's involvement in the social media company's fact-checking project, which was launched in December 2016 following widespread criticism of Facebook's failure to address fake news on the platform ahead of that year's U.S. presidential election.

Since its inception, the program has expanded to include more than 50 fact-checking partners working in 42 languages.

"Currently, the only sense we have of how many people our fact-checks are reaching comes from data on visits to our own website," Full Fact said in the report.

Facebook Canada announced it was joining the initiative in June 2018, along with Agence France-Press (AFP) as its fact-checking partner, to vet Canada-focused news on its platform. The partnership will extend at least through the October 2019 federal election.

In other words, after six months of fact-checking, Facebook doesn't even know if its efforts are effective. 

In May, the project fact-checked a quote by Theodore Roosevelt that was falsely attributed to Canada’s seventh prime minister, Wilfrid Laurier. The misattributed quote, which called for immigrants to assimilate, had been shared more than 250,000 times on Facebook by the time AFP reviewed it.

In January 2019, Full Fact joined the Facebook partnership, with the goal of combating misinformation and disinformation in the U.K.

Here's how it works: fact-checkers are provided with a “queue” of content — including text-based posts, images, links and videos — that Facebook has deemed “possibly false.” Items are then prioritized based on factors such as how rapidly they’re being spread and whether the information could cause harm to readers — for example, by promoting dangerous health claims or advice.

Fact-checkers then evaluate the content and assign one of nine possible ratings: False, Mixture, False Headline, True, Not Eligible, Satire, Opinion, Prank Generator and Not Rated. At that point, Facebook may take action to warn users about false information and slow the spread of dangerous claims.
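To make that workflow concrete, here is a minimal sketch of how such a review queue might be modelled. The class names, fields and weighting below are illustrative assumptions, not Facebook's actual implementation.

```python
from dataclasses import dataclass
from enum import Enum

class Rating(Enum):
    """The nine ratings available to fact-checkers, per the report."""
    FALSE = "False"
    MIXTURE = "Mixture"
    FALSE_HEADLINE = "False Headline"
    TRUE = "True"
    NOT_ELIGIBLE = "Not Eligible"
    SATIRE = "Satire"
    OPINION = "Opinion"
    PRANK_GENERATOR = "Prank Generator"
    NOT_RATED = "Not Rated"

@dataclass
class QueueItem:
    url: str
    shares_per_hour: float  # how rapidly the content is spreading
    harm_score: float       # e.g. dangerous health advice scores higher
    rating: Rating = Rating.NOT_RATED

def prioritize(queue: list[QueueItem]) -> list[QueueItem]:
    """Review fast-spreading, potentially harmful items first.

    The weighting here is a guess at the kind of heuristic the report
    describes, not a documented formula.
    """
    return sorted(
        queue,
        key=lambda item: item.shares_per_hour * (1 + item.harm_score),
        reverse=True,
    )
```

The choice of category matters because it drives what happens next: a post rated False would be warned about and downranked, while one rated Satire or Opinion would not.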

Most claims couldn't be fact-checked

From January through June, Full Fact’s team published 96 fact-checks, of which 59 were deemed to be false, 19 were a mix of true and false, seven were categorized as opinion, six were categorized as satire and five were rated as true.

But according to the report, most of the content provided by Facebook for review couldn’t actually be fact-checked. Many of the items were statements of opinion or otherwise non-factual claims, the report said, while other posts didn't clearly fit into one of the nine rating categories.

Fact-checkers also struggled to evaluate posts that contained both true and false claims, prompting calls for clearer guidelines and a more practical rating system.

"For a number of the posts we fact-checked, we found the existing rating system to be ill-suited," Full Fact reported. To help address the problem, Full Fact recommended changing the rating system to allow fact-checkers to apply multiple ratings to the same item.

The report also called on Facebook to add new rating categories such as “unsubstantiated,” which would be applied when fact-checkers "cannot definitively say something is false, but equally can find no evidence that it is correct." In some instances, a “more context needed” rating may be necessary as well — for example, when technically accurate information is presented without crucial context, such as listing medication side effects without mentioning that the risk of experiencing them is very low, as seen in this example:

This post lists potential side effects of one brand of contraceptive pill. Most of them are accurate, in the sense that they are listed as potential side effects, but it could well be interpreted in ways that overstate the risk. We rated it as “true,” as we did not feel it was inaccurate enough to justify even a “mixture” rating; however, we believe that a “more context” rating would have been more appropriate.

The report also noted that it is difficult to fact-check humorous content and recommended that a broader category be added to account for "non-serious, lighthearted" humour that doesn't fall under the categories of "satire" or "prank." Such a rating would not reduce the reach of the post, unlike those deemed false or partially false.

"It is not our job to judge the quality of people’s senses of humour," Full Fact added.

However, adding more approved categories under the banner of "humour" could come with unintended consequences. As noted in an independent audit of Facebook released in July, hate groups and extremists often hide offensive and hateful content behind the veil of “humour.”

Currently, Facebook's hate-speech policy has an exception for humour, but the audit team recommended that the exception be removed. It's not clear how Facebook plans to balance the recommendations from these two reports.

Facebook responded to the new report in a statement, saying: “Our third-party fact-checking program is an important part of our multi-pronged approach to fighting misinformation. We welcome feedback that draws on the experiences and first-hand knowledge of organizations like Full Fact, which has become a valued partner in the U.K."

The social-media company said it's already pursuing many of the recommendations in the report "as part of continued dialogue with our partners," but acknowledged that "there’s always room to improve."

Health claims are common but hard to fact-check

At least 18 items checked by Full Fact were health-related claims, ranging from the most common — vaccine-related items — to whether coughing can help someone who thinks they're having a heart attack (the answer is no).

However, Full Fact noted that it struggled to evaluate many of these claims, in part because fact-checking health information "often require[s] specific expertise" that was not readily available to Full Fact.

"We have often found it difficult to get answers on these health claims," the report noted, describing one case where the fact-checkers were "bounced between" 13 press offices during their search for information.

Full Fact also expressed similar concerns in other subject areas, writing:

We are concerned that we are finding areas where it is hard to find sources of impartial and authoritative expert advice, especially from organizations that are capable of responding in time to be relevant to modern online public debate.

As a result of these delays, the organization said it's worried that a lot of potentially harmful content may be going unchecked.

Evidence suggests their concerns are valid. According to an April 2019 study conducted by Health Feedback, a non-partisan network of scientists, along with the Credibility Coalition, only three of the top 10 most-shared health articles of 2018 were deemed credible. The rest were either misleading or included some level of false information.

We still don't even know if fact-checking is working — and Facebook isn't providing the data to find out

According to the report, Facebook did not provide enough information for Full Fact to evaluate the impact of its fact-checking on the spread or visibility of false claims on the platform. "Currently, the only sense we have of how many people our fact-checks are reaching comes from data on visits to our own website," Full Fact said.

In other words, after six months of fact-checking, the organization doesn't know whether its efforts are effective.

As Full Fact notes, “there are multiple different ways in which fact-checking can be beneficial,” including slowing the spread of false information, correcting misconceptions based on false information that has already spread, helping people understand the broader context or reducing the incentive for people to spread false information in the future. The report recommends studying all of these potential pathways of action.

"We want Facebook to share more data with fact-checkers, so that we can better evaluate content we are checking and evaluate our impact," the report said.

The data needed to evaluate such efforts need not be complicated. For example, Facebook could share statistics on how frequently people click on and share links before and after those links are flagged as false.
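A minimal sketch of that before-and-after comparison, assuming Facebook exported per-post view and share counts (the field names and numbers here are invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class PostStats:
    views: int
    shares: int

def share_rate(stats: PostStats) -> float:
    """Shares per view, guarding against division by zero."""
    return stats.shares / stats.views if stats.views else 0.0

def flagging_effect(before: PostStats, after: PostStats) -> float:
    """Relative change in share rate once a post is flagged as false.

    A result of -0.6 means sharing dropped by 60 per cent.
    """
    base = share_rate(before)
    return (share_rate(after) - base) / base if base else 0.0

# Hypothetical numbers: 40,000 views and 2,000 shares before flagging,
# 35,000 views and 700 shares after.
effect = flagging_effect(PostStats(40_000, 2_000), PostStats(35_000, 700))
print(round(effect, 2))  # -0.6
```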

This is hardly the first time Facebook's fact-checking program has come under fire for lack of transparency. In February, Snopes quit the partnership, saying the social-media company refused to release data about the impact of its work. In October 2018, ABC News also dropped out.

Expand fact-checking to Instagram

In May, Facebook rolled out an image-detection tool to begin fact-checking Instagram posts. (Facebook owns Instagram.) The tool helps identify images and memes that may be false, and sends them to the same dashboard used by Facebook's fact-checking partners. If flagged as false after review, the posts stop appearing in Instagram’s Explore tab and hashtag search results, but still appear on Instagram pages for users who follow the accounts that posted them.
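Described as a set of rules, that behaviour might look something like the sketch below; the surface names and function are assumptions for illustration, not Instagram's actual logic.

```python
def instagram_post_visible(flagged_false: bool, surface: str,
                           viewer_follows_author: bool) -> bool:
    """Model of the visibility rule described above: a post flagged as
    false is dropped from Explore and hashtag search, but stays visible
    to people who already follow the account that posted it."""
    if not flagged_false:
        return True
    if surface in ("explore", "hashtag_search"):
        return False
    return viewer_follows_author

# A flagged post no longer surfaces in Explore...
assert not instagram_post_visible(True, "explore", viewer_follows_author=False)
# ...but a follower still sees it in their feed.
assert instagram_post_visible(True, "feed", viewer_follows_author=True)
```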

Now, Full Fact is calling on the company to expand the fact-checking program to Instagram so that fact-checkers can directly evaluate content on the image-based platform.

“The potential to prevent harm is high here, particularly with the widespread existence of health misinformation on the platform,” the report said. “Facebook has already taken some steps toward using the results of the program to influence content on Instagram, or Instagram images that are shared to Facebook.”

But “directly checking content” on Instagram is “not yet a part of the program,” the report said.

Facebook should also keep developing better tools to identify false content, including posts that are going viral, Full Fact said. In particular, it should focus on identifying false content that has gone viral in the past, as well as false posts being reshared with slight changes in wording and layout. Manually reviewing and identifying these variations is time-consuming and imprecise, the report said, which makes the task a prime candidate for machine learning.
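As a rough illustration of why reworded duplicates lend themselves to automation, the Python standard library's difflib can already score how similar two posts are. A production system would presumably use more robust text fingerprinting or learned embeddings, but the idea is the same:

```python
from difflib import SequenceMatcher

# A claim already debunked by fact-checkers (hypothetical example,
# based on the heart-attack claim mentioned earlier in this article).
KNOWN_FALSE = "Coughing hard every few seconds can save your life during a heart attack."

def similarity(a: str, b: str) -> float:
    """Character-level similarity in [0, 1]; 1.0 means identical after lowercasing."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# The same claim reshared with slight changes in wording.
candidate = "Coughing HARD every few seconds could save your life in a heart attack!"

if similarity(KNOWN_FALSE, candidate) > 0.85:
    print("Likely a reworded copy of an already debunked claim")
```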

As National Observer has reported, dubious "news sites" often rebrand as new sites, making them harder to track.

Full Fact's transparency report is not the first time Facebook's fact-checking program has been called into question. An initial review of the program conducted by the Guardian found that flagging content as false was largely ineffective. Similarly, a 2017 study by Yale University researchers found that tagging posts as false could lead to a phenomenon called the implied truth effect, whereby the presence of such flags makes people more likely to believe that any unflagged content is true — even if it hasn't been fact-checked.

Full Fact said it would like to play a role in generating data to develop new technology to detect and mitigate harmful information, but that Facebook isn't sharing its plans with its partners. "[A]t the moment we know too little about plans for using that data," it concluded.
