Improving Fact-Checking by Improving Context-Checking 🔎

The Society Library
10 min read · Nov 29, 2021


We need to widen the lens and improve the rigor of this important role.

A small magnifying glass highlights the text “Fact #145924058,” a larger magnifying glass highlights that fact plus Fact#324. A larger magnifying glass encapsulates those plus additional Fact#9845, Fact#24958, and Fact#19873. The largest magnifying glass encapsulates all of those plus the word “Context” which repeats around the rim of the largest magnifying glass.
The Society Library is a 501(c)(3) collective intelligence nonprofit organization.

In 1710, Jonathan Swift wrote:

“Falsehood flies, and truth comes limping after it, so that when men come to be undeceived, it is too late; the jest is over, and the tale hath had its effect.”

— [present source, original source unconfirmed by the Society Library]

It seems like sensationalized and false news has always been perceived to travel fast, but now we live in an age where our human tendencies are amplified by technology. Fact-checkers, therefore, have a difficult and arguably impossible job. A paper in Science suggests that rumors diffuse much more quickly through social networks than “the truth” does (truth as determined by fact-checkers). According to the paper, “the top 1% of false news cascades diffused to between 1000 and 100,000 people, whereas the truth rarely diffused to more than 1000 people.” If this is a trend in general, how could we expect fact-checkers to keep up?

We would imagine that fact-checkers likely optimize by making the most impactful, viral, or politically relevant claims the highest priority. And it is for those claims that we think increased rigor of analysis and increased “context-checking” should be applied, even if it increases the distance to limp across the finish line to publication.

On Context and Context-Checking:

Facts and fact-checks exist in and out of various social, political, cultural, educational, community, and historical contexts. We’ll be working with this definition of context: “the circumstances that form the setting for an event, statement, or idea, and in terms of which it can be fully understood and assessed.” In a recent article about “contextual density,” we wrote: “Context is carried around by audiences and assumed of the authors, and vice versa. How could we possibly expect one to have the contextual intelligence of the other?”

If fact-checkers want to be more effective with more disparate audiences, they may have to. So, what is “context-checking”? Context-checking is a series of techniques to discover all of the different settings or circumstances to which a fact is relevant. It is a way to overcome the inherent biases of the fact-checker or the fact-checking process. These biases include research biases, cognitive biases, and logical fallacies that weaken the reputation and accuracy of fact-checking among audiences of a different contextual background who can detect such errors.

Context-Checking includes:

  • Time-dependency Awareness: Facts can be correct at one point in time and incorrect at another point in time; it is critical to be aware of this while researching a fact, and acknowledge this in fact-check articles.
  • Audience Analysis: Audiences have different prior knowledge, assumptions, sentiments, values, and beliefs. It is important to be aware of what different audiences know and assume about a fact, so that all angles can be addressed (especially with entrenched knowledge and cultural myths vs. new information).
  • Reputation Awareness: Some sources are considered by some audiences to be reputable and some are not. It is important to diversify sources and acknowledge the validity of reputation criticism.
  • Checking Assumptions about Trusted Sources: A fact-checker may implicitly trust Wikipedia or an elite University blog, and not bother to check the sources of what they cite as their evidence. In doing so, they are perpetuating inaccuracies that audiences can easily discover if they perform their own due diligence.
  • Political Implications Awareness: Facts that are checked may inadvertently support one high-level political narrative over another, and this can be detected by some audiences. For these types of fact-checks, there should be higher research standards and extreme accuracy in language.

Example:

This Politifact ‘fact-check’ rated the claim that “thirty-four of the 47 men depicted in the famous ‘Declaration of Independence’ painting were slaveholders” as “true.” While we believe Politifact did a great job of including some kinds of contextual nuance in their article, they have neglected or transgressed other contexts, including:

  • Assumptions about Trusted Sources: The researchers seem to have demonstrated some bias towards the reputation of what they consider to be sources of authority. Based on the data they provided, they identified Founding Father George Clymer as not a slaveholder “per Google Searching.” No further source was cited. Was the source of authority their own quick judgement? If that is the case, then this bias led to an outright error. A simple Google Search would reveal evidence which suggests that George Clymer did indeed hold slaves, and wrote a letter about exchanging his slaves with others. They wouldn’t be the only ones to get it wrong, however, since the Wikipedia page on George Clymer also appeared to be inaccurate about his slaveholder status (the Wikipedia text is directly contradicted by citations in its bibliography). We have since corrected the Wikipedia article, but you can see screenshots below:

Wikipedia Article:

The archived Wikipedia page, with highlighted text reading: “Clymer was one of 34 Signers that did not own slaves.”

Webpage with Image of Original Source, cited in Wikipedia:

The archived bidding page from a website that auctions and sells historical artifacts, part of Wikipedia’s bibliography, which includes photographs of Clymer’s letter and the highlighted text: “The Declaration signer considers swapping slaves with Colonel George Morgan of Princeton, New Jersey.”

But that wasn’t the only contextual error we found in this article. There was also an error regarding —

  • The Context of Time: The fact itself (as they’ve phrased it) may only be true if the data is cherry-picked from certain historical time periods. The fact they checked was “thirty-four of the 47 men depicted in the famous ‘Declaration of Independence’ painting were slaveholders.” But does this mean that they were slaveholders at the time of the painting or that they held slaves at some point? Let’s take Benjamin Franklin as an example. They wrote that “yes” Benjamin Franklin would be considered a slaveholder in their tally. At one point in time Benjamin Franklin indeed held slaves, but at another he was the first President of an Abolitionist Society (this was also misattributed to Clymer on Wikipedia, which we also corrected) and petitioned Congress to end slavery. The assessment of Franklin is true for certain periods in his life and false in others. To Politifact’s credit, they did mention this in their article; however, it would still render the “fact,” as they’ve stated it, unclear. At the very least, this judgement of Franklin is inconsistent with the judgement of Clymer.

Why does this matter? We often reference the past in a general sense without specifying such details (i.e. Abraham Lincoln was President vs. Abraham Lincoln was a U.S. President from March 1861 – April 1865), but this fact about Franklin is an example of how the context of political implication can be so important. These facts about the founders’ slaveholder status can be aggregated up to a larger discussion about the internalized racism and bias in our political system, and by being aware of that, we get clues as to what other details may be important to include when discussing it, so that one high-level political narrative isn’t inadvertently supported by cherry-picked data over a more nuanced narrative that is more inclusive of relevant detail. An easy correction here could be to add the words “at some point in their lives” (or some other accurate caveat) at the end of the stated claim, so as to both be more accurate and avoid offending or alienating historically knowledgeable and politically impassioned audiences.

  • Failure to analyze the audience well and understand political implications: This founding father fact-check was likely relevant because of the magnitude of the cultural conflicts over Founding Father statues and the racial tensions they symbolize. This claim exists in a political, historical, and racially sensitive context. However, this claim also exists in an entrenched cultural context (i.e. love for the Founding Fathers). Lack of sensitivity towards all of these contexts can lead to polarizing reactions to the fact-check which could be avoided with more research rigor and editorial care. This may be an especially important point, because according to a report covered by Poynter, “70% of Republicans believe fact-checkers tend to favor one side, while 29% of Democrats say the same” and 50% of Americans believe that fact-checkers “deal fairly with all sides.”

At the Society Library, we specialize in understanding the context of claims. We perform rigorous research to steel-man every point of view in the course of our analysis and we deconstruct various media types — so we can pick up on the opinions, sentiments, and beliefs of disparate audiences (expert and lay alike). Sometimes, understanding the context of an audience is like a “joke you just had to be there for,” and our work includes being knowledgeable enough of audiences so we’re in on every joke. Specialists, like The Society Library, can help fact-checkers understand these audiences’ views, so they can take that into consideration when performing their work. Failing to analyze and understand certain audiences well can also lead to this error:

  • Missing the Link: Sometimes, the claim being fact-checked is a misrepresented sentiment in a community. With a little “devil’s advocacy research” and “steel-manning,” a claim that is false as-is may actually have meaningful origins or be similar enough to “true facts” that are worth mentioning. For example, fact-checking Alex Jones about chemicals turning frogs gay is one thing, but with a little steel-manning and research, people may discover the evidence-based claim that the pesticide Atrazine “induced hermaphroditism and demasculiniz[ation]” in male frogs, according to a University of California, Berkeley study. So, even if fact-checkers correct the incorrect fact as stated, they may be perceived as ignoring a more accurate argument that could be assumed to be “what may have been truly meant” or “what he was talking about,” and that the speaker either misspoke or misremembered, as people do. Failing to steel-man claims like this may be perceived as unfair by certain audiences and as a “gotcha” strategy to ruin the reputation of people who some audiences trust for information. Though to be clear, we don’t know if this is what Alex Jones was referring to.

However, there is more we could improve:

Although fact-checkers have the impossible job of trying to stave off the impact of misinformation, and therefore their more meme-able “true/false” graphics may be necessary, we have also seen fact-checkers conflate “no evidence” with “false” and “has evidence” with “true.” We would be interested to see whether more nuanced labels could lead to improved trust in the work that fact-checkers are doing, if adopting them is not worthwhile for the sake of accuracy alone. It would be helpful even to identify what kind and how much evidence a fact has.

Example of more nuanced (though likely less aesthetically pleasing) fact-check labels
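To make the idea concrete, here is a minimal sketch of what a more nuanced label scheme could look like as a data structure. All of the names, categories, and the example values below are our own hypothetical illustrations, not an established taxonomy or anything Politifact or other fact-checkers actually use:

```python
from dataclasses import dataclass
from enum import Enum

class EvidenceStatus(Enum):
    """Hypothetical, more granular alternatives to a binary true/false verdict."""
    SUPPORTED_BY_MULTIPLE_SOURCES = "supported by multiple independent sources"
    SUPPORTED_BY_SINGLE_SOURCE = "supported by a single source"
    NO_EVIDENCE_FOUND = "no evidence found (which is not the same as false)"
    CONTRADICTED_BY_EVIDENCE = "contradicted by available evidence"
    TIME_DEPENDENT = "true only for certain time periods"

@dataclass
class FactCheckLabel:
    claim: str
    status: EvidenceStatus
    evidence_count: int           # how much evidence was found
    evidence_kinds: list          # what kind, e.g. ["primary document"]
    time_qualifier: str = ""      # caveat, e.g. "at some point in their lives"

# Labeling the Declaration-painting claim with a time caveat attached:
label = FactCheckLabel(
    claim="34 of the 47 men depicted were slaveholders",
    status=EvidenceStatus.TIME_DEPENDENT,
    evidence_count=3,
    evidence_kinds=["primary document", "secondary scholarship"],
    time_qualifier="at some point in their lives",
)
print(label.status.value)  # true only for certain time periods
```

A structure like this would let a fact-check surface the kind and amount of evidence alongside the verdict, rather than collapsing everything into “true” or “false.”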

Fact-checking will likely continue to be an integral part of the information ecology for some time, but there are evolutions of the work that are already growing, including public content analysis. For example, the work of Public Editor, by Goodly Labs, is a distributed content analysis system that enables analysts to flag over 40 types of misinforming content and produces a score on the reasoning, language, and sourcing of articles that can be seen both in depth and holistically. Instead of working on a fact-basis, it works on an article-basis. See screenshot of example article below:

Screenshot from Public Editor example articles

Colored underlines, when hovered over, provide additional information about the text they correspond to, which gives users additional nuance about problematic reasoning or misleading language at the claim, argument, and sentence-level.

While we think it is inevitably necessary that fact-checking expands to “article-checking,” the work of the Society Library is really about expanding that further to “debate-checking.” (This is something that our friends at Goodly Labs are working on also, through their SamePage project).

Facts are small pieces of larger societal deliberations about critical issues, including climate change, COVID-19, and existential risk issues. Regardless, facts will need to be checked on the basis of their own accuracy, but their impact is not just on the mind of the individual reading them one at a time. They are relevant to a larger social discussion — they are relevant in a much broader context.

At the Society Library, we approach fact-checking as a “whole systems approach.” Facts can range from as granular as “end-of-century ocean acidification levels have negligible effects on important behaviours of coral reef fishes” to as broad as “climate change will be catastrophic.” In our work, granular facts are effectively arguments, claims, and evidence that either support or refute broad facts like “climate change will be catastrophic.” While fact-checking traditionally results in “true/false” conclusions, our work is about broadening the context of evidence to its maximum so we can inform probabilistic reasoning about the likely “truth” of any broad claim or position in a large social debate.
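The Society Library does not publish a specific formula for this, but one common way such probabilistic reasoning can work is a naive-Bayes-style aggregation of evidence in log-odds space. The sketch below is purely illustrative; the function name, likelihood ratios, and prior are all assumed for the example, not taken from our actual methodology:

```python
import math

def aggregate_log_odds(prior, evidence):
    """Combine granular claims into a probability for a broad claim.

    `prior` is the starting probability for the broad claim.
    `evidence` is a list of (likelihood_ratio, supports) pairs:
    how much more likely the granular fact is if the broad claim
    is true, and whether the fact supports (True) or refutes
    (False) that claim.
    """
    log_odds = math.log(prior / (1 - prior))
    for likelihood_ratio, supports in evidence:
        # A refuting fact contributes the reciprocal ratio.
        lr = likelihood_ratio if supports else 1 / likelihood_ratio
        log_odds += math.log(lr)
    odds = math.exp(log_odds)
    return odds / (1 + odds)

# Start agnostic (50%) on a broad claim, then fold in three
# granular claims of varying (hypothetical) evidential strength:
p = aggregate_log_odds(0.5, [(3.0, True), (2.0, True), (1.5, False)])
print(round(p, 2))  # 0.8
```

The point of the sketch is only that granular facts can move a broad claim’s credence up or down by degrees, instead of forcing a single binary verdict.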

Overall, we believe “debate-checking” is the logical end to the work of correcting misinformation, and it’s a monumental task. We hope to work with fact-checkers long-term to make this work more feasible, and if we do, we hope that context-checking techniques are more deeply integrated in the fact-checking process, so we can rely on increased accuracy in this incredibly important public work.

To learn more about our “debate-checking” work, see our program The Great American Debate. We also invite fact-checkers to learn more about context-checking at The Society Library, by writing to us on our website. We hope to work in partnership with you.

If there are any errors, misattributions, or fallacies in this article, we ask that they be brought to our attention immediately. The Society Library is committed to accuracy in our content and we’ve signed the ProTruth Pledge.

The Society Library is a 501(c)(3) non-profit collective intelligence library that builds educational databases of knowledge by extracting arguments, claims, sentiments, and evidence from books, academia, news, the web, and other media. The Society Library offers consultation services to improve logical, unbiased decision-making and teaches meta-literacy based programs and argument modeling through a variety of educational internships.

