How to identify fact from fiction during the COVID-19 pandemic and beyond

By Todd Marlin

EY Global Forensic & Integrity Services Technology & Innovation Leader

Global leader in technology and innovation, with significant experience serving the financial services industry.

5 minute read 5 Apr 2020

Resources

  • Can you trust what you read? (PDF)

While fake news has existed throughout history, the coronavirus (COVID-19) pandemic has amplified the issue and the harm it can do to society.

Fake news — the intentional spread of false information — erodes public trust, harms institutions and individuals, and confuses a public trying to make sense of an increasingly complex world.

Digital media companies have long been criticized for failing to remove obvious falsehoods from their channels. Another contentious election year in the US is putting renewed pressure on them, and COVID-19, perceived as the most significant pandemic of the social media age, has only intensified the issue.

Panic over a deadly disease, misleading and conflicting information from government officials and partisan politics have sent a tidal wave of fake news swirling around the globe, as Can you trust what you read? (pdf) makes clear.

The 2019 CIGI-Ipsos Global Survey found 86% of internet users said they have been duped by fake news at least once. A Massachusetts Institute of Technology (MIT) study of Twitter from 2006 to 2017 found that fake news stories were 70% more likely to be retweeted than true stories, with the truth taking about six times as long to reach 1,500 people as falsehoods.

Falsehoods spread farther and faster than the truth in every category, with political news leading the way and business rumors coming in third. People are predisposed to favor information that is novel, or that confirms what they already believe, over information that is merely accurate.


Pandemic leads to an explosion in fake news

COVID-19 has been unprecedented in changing the way the world lives and works, so it’s not surprising that the virus has led to an outbreak of fake news, with people around the world sharing content on everything from lockdowns to tips for warding off coronavirus. Nearly half of Americans say they’ve read fake news related to the virus in some form of media, and nearly a quarter seemed to believe the false story that COVID-19 was intentionally created, according to a Pew Research Center survey.

Business leaders have also fallen victim to fake news in the wake of COVID-19. The chief medical officer of a US company sent an email encouraging employees to drink warm water to ward off the virus, after reading the advice in a viral post falsely attributed to Stanford University. The university denied issuing the post, and epidemiologists say the advice is not valid. While the advice was harmless, the episode embarrassed both the company and the executive.

Well before the COVID-19 pandemic, many internet companies were taking steps to manage the risk of fake news, but those efforts have dramatically increased. Twitter changed its policies to remove tweets that run the risk of causing harm or panic, as well as tweets advising ineffective treatments for the virus. Facebook, which reported in March 2020 that more than half of the articles read on its site were about COVID-19, revised its algorithms to promote official accounts and remove false content. Google banned coronavirus-related apps from its smartphone store and ads from people trying to profit from the pandemic.


Enhancing human review with machine learning

The deluge of fake news has been so overwhelming that the fact-checking site Snopes told its readers in March 2020 that it was unable to keep up due to resource constraints. The failure to control fake news, despite increased efforts, shows the need for better methods to detect it.

One common approach adopted by social media companies relies on humans (subscribers, contractors) to flag potentially false content. Human review has two major limitations: flagging is subject to personal bias, and it cannot scale to the volume of information published in today's digital era. Inevitably, artificial intelligence (AI) has come into focus in the fight against fake news.

A common AI-based hybrid approach builds on two classification models: content and social context. The content model analyzes the topic distribution within a news article. It is rarely used on its own, however, because relying on content alone makes it difficult to differentiate intentional deception from mere bias.
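To make the content model concrete, here is a minimal sketch of a topic-based classifier using scikit-learn. It assumes a labeled set of article texts is available; the two-item dataset and labels below are purely illustrative.

```python
# A minimal sketch of a content-based classifier, assuming a labeled
# dataset of article texts (the training examples here are hypothetical).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: article text paired with fake/real labels.
articles = [
    "Miracle cure stops virus overnight, doctors stunned",
    "Health ministry reports daily case counts and testing figures",
]
labels = ["fake", "real"]

# TF-IDF captures the word/topic distribution of each article;
# logistic regression learns which distributions correlate with fakes.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(articles, labels)

# Toy prediction; the output depends entirely on the tiny training set.
print(model.predict(["Secret remedy the government is hiding"]))
```

A classifier like this learns only from wording, which is precisely why content models struggle to separate deliberate deception from ordinary bias.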

The content model is therefore complemented by a social context model, which focuses on key aspects of the social network (e.g., followers, user characteristics, interaction and engagement history). The drawback of this hybrid approach is that the analysis is still confined to the news item and its immediate circulation. That limited scope can make it difficult to understand the broader context of the news and, potentially, the type of fake news involved.
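One way to picture the hybrid is a heuristic social-context score blended with the content model's output. The engagement fields, thresholds and weights below are hypothetical and untuned; a production system would learn them from data.

```python
# A sketch of social-context features combined with a content score.
# All field names, thresholds and weights are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class SocialContext:
    follower_count: int    # audience size of the sharing account
    account_age_days: int  # newer accounts are weaker signals
    retweet_rate: float    # how fast the post is being reshared
    verified: bool

def social_context_score(ctx: SocialContext) -> float:
    """Heuristic risk score in [0, 1]; higher means more suspicious."""
    score = 0.0
    if ctx.account_age_days < 90:
        score += 0.3       # very young account
    if ctx.retweet_rate > 100:
        score += 0.4       # unusually viral spread
    if not ctx.verified and ctx.follower_count < 100:
        score += 0.3       # unverified account with no established audience
    return min(score, 1.0)

def hybrid_score(content_prob_fake: float, ctx: SocialContext) -> float:
    """Blend the content-model probability with the social-context score."""
    return 0.6 * content_prob_fake + 0.4 * social_context_score(ctx)

ctx = SocialContext(follower_count=40, account_age_days=12,
                    retweet_rate=250.0, verified=False)
print(round(hybrid_score(0.8, ctx), 2))  # -> 0.88
```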

Another approach, championed by MIT, focuses on news sources. Researchers from MIT's Computer Science and Artificial Intelligence Laboratory and the Qatar Computing Research Institute developed machine learning models to assess the authenticity and neutrality of news sources. The drawback of this approach is that while certain facts in an article may be fabricated or embellished, the overall point of the article may still be authentic.
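The source-focused idea can be sketched by aggregating article-level predictions into a per-outlet reliability score. This illustrates the concept only, not the MIT/QCRI models themselves, and the outlet names are placeholders.

```python
# A sketch of source-level scoring: aggregate article-level fake
# probabilities per outlet into a reliability estimate. Outlet names
# and probabilities are hypothetical.
from collections import defaultdict
from statistics import mean

# Hypothetical (source, probability-of-fake) pairs from a content model.
predictions = [
    ("daily-rumor.example", 0.9),
    ("daily-rumor.example", 0.7),
    ("wire-service.example", 0.1),
    ("wire-service.example", 0.2),
]

by_source = defaultdict(list)
for source, prob_fake in predictions:
    by_source[source].append(prob_fake)

# Reliability = 1 - average fake probability across a source's articles.
reliability = {s: round(1 - mean(ps), 2) for s, ps in by_source.items()}
print(reliability)  # {'daily-rumor.example': 0.2, 'wire-service.example': 0.85}
```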

A new way of thinking

In examining the fake news problem, it may be worthwhile to learn from government counterespionage efforts. Open-source intelligence (OSINT) is a key element of government counterintelligence strategies. OSINT refers to any information that can be legally gathered from free, public sources. The information can be about an individual or an organization.

Given its reliance on freely available data, OSINT can be compromised if an individual or organization falsifies information in the public domain. However, this can be addressed by cross-checking relevant pieces of information about an individual to look for inconsistency.

For example, starting from an author's social media profile, one can extract a claimed work history, then search the employer's website and court records to verify the online profile. Given the vast scope of data covered by OSINT, extensive cross-checking can be carried out to minimize the risk of false information.
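Here is a minimal sketch of such a cross-check, with hypothetical data standing in for real lookups against social media, an employer website and court records:

```python
# A sketch of OSINT cross-checking: compare what an author claims on
# social media against independent public sources. The data and field
# names are hypothetical placeholders for real lookups.
def cross_check(claims: dict, verifications: list[dict]) -> list[str]:
    """Return fields where an independent source contradicts a claim."""
    inconsistencies = []
    for field, claimed in claims.items():
        for source in verifications:
            found = source.get(field)
            if found is not None and found != claimed:
                inconsistencies.append(
                    f"{field}: claimed {claimed!r}, "
                    f"{source['name']} says {found!r}"
                )
    return inconsistencies

social_profile = {"employer": "Acme Labs", "title": "Chief Scientist"}
public_sources = [
    {"name": "employer website", "employer": "Acme Labs"},
    {"name": "court records", "title": "Sales Associate"},
]
print(cross_check(social_profile, public_sources))
# -> ["title: claimed 'Chief Scientist', court records says 'Sales Associate'"]
```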

By bringing in additional OSINT data, particularly data outside the news content itself, the scope of analysis can be greatly expanded, aided by machine learning and natural language processing technologies. The outcome is more comprehensive insight into whether news content is authentic and, if not, what type of fake news it represents.
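Natural language processing can automate the first step of that expansion: extracting the entities in an article that are worth cross-checking against OSINT sources. The sketch below uses the spaCy library (the language model must be installed separately via `python -m spacy download en_core_web_sm`); the OSINT lookup itself is left as a placeholder.

```python
# A sketch of NLP-assisted OSINT: pull named entities out of an article
# so each can become a candidate cross-check query.
import spacy

nlp = spacy.load("en_core_web_sm")

text = ("A viral post attributed to Stanford University claimed that "
        "drinking warm water wards off the coronavirus.")

doc = nlp(text)
for ent in doc.ents:
    # Each entity is a candidate OSINT query, e.g. checking the claimed
    # source's own website for the statement.
    print(ent.text, ent.label_)  # e.g. "Stanford University" ORG
```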

Using OSINT, an EY team has run a series of tests on known misinformation. The preliminary results suggest encouraging opportunities to commercialize this approach for organizations fighting fake news.


Organization-wide planning is important

Both the public at large and businesses need to step up their efforts to proactively look for new ways to detect and contain fake news, while making sure they don’t contribute to the problem at the same time.

There are some key actions to take to manage the risk of fake news:

  • Build a culture of integrity, compliance and ethics. If your organization has a reputation for ethical behavior, it stands to be damaged less by false claims of wrongdoing.
  • Develop a crisis management plan for dealing with potential risks from damaging fake news. Stress-test the plan against worst-case scenarios.
  • Strengthen employee education and build vigilance on detecting, transmitting and reporting fake news.
  • In addition to traditional news outlets, monitor social media and known fake news sites to flag potentially damaging coverage (a minimal monitoring sketch follows this list).
  • Develop data-driven detection programs through innovative use of OSINT and AI technologies.
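For the monitoring action above, here is a minimal sketch of a feed-monitoring job, assuming RSS/Atom feeds as input; the feed URL and watch list are hypothetical placeholders.

```python
# A sketch of a media-monitoring job over RSS/Atom feeds, using the
# feedparser library. The feed URL and watch list are hypothetical.
import feedparser

WATCHLIST = ["our company name", "product recall", "executive fraud"]
FEED_URL = "https://news.example.com/feed.xml"  # placeholder feed

def flag_entries(feed_url: str, watchlist: list[str]) -> list[str]:
    """Return headlines that mention any watched term."""
    feed = feedparser.parse(feed_url)
    hits = []
    for entry in feed.entries:
        text = f"{entry.get('title', '')} {entry.get('summary', '')}".lower()
        if any(term in text for term in watchlist):
            hits.append(entry.get("title", "(untitled)"))
    return hits

for headline in flag_entries(FEED_URL, WATCHLIST):
    print("Review:", headline)
```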


Summary

There are several key steps organizations can take: building a culture of integrity, developing a crisis management plan, strengthening employee education, deploying monitoring services, and implementing open-source intelligence and AI technologies.
