Written by: 

Roman Osadchuk, Eurasia Research Assistant, Atlantic Council’s Digital Forensic Research Lab

Jakub Kalenský, Senior Fellow, Atlantic Council’s Digital Forensic Research Lab

 

What is disinformation?

Academics and researchers[i] oppose the use of the term “fake news” because of its vague and broad nature, and instead distinguish between three concepts describing different kinds of problematic information: disinformation, misinformation, and malinformation.

While many definitions of disinformation exist, most of them agree that this sort of information is “deceptive”[ii] and “intentional”. Claire Wardle and Hossein Derakhshan define disinformation as false information that is “deliberately created to harm a person, social group, organization or country”[iii].

On the other hand, misinformation is false information that was not created to do harm or was disseminated without knowledge of the fraudulent nature of its message. An example of misinformation might be a photo with a wrong caption or a piece of satire that is taken seriously[iv].

Malinformation is factual information acquired through hacking, leaking, or another illegal act and used with the clear intent to harm a person, entity, or country. Leaked emails or confidential documents fall into this category. While this category of information is the rarest of the three, it can have a devastating impact on an actor’s reputation.

While mis- and disinformation are problems in their own right, some state and non-state actors use them for political or military gain. One of the most active states in exploiting them as a tool of information warfare is Russia. According to Keir Giles[v], Russia perceives information warfare as an “ongoing activity regardless of the state of relations with the opponent”, which means that Russia constantly employs various instruments and activities to influence the perception and behavior of both its own population and the populations of countries it regards as adversaries.

Disinformation is ultimately not a problem that might be solved by a single actor, such as the media, fact-checking organizations, online platforms, or governments. It requires a whole-of-society approach to diminish the effects of disinformation at multiple levels. Some countermeasures will only be effective in some societies or for a certain part of an audience. However, this does not mean that these measures are inappropriate—it merely means that we will need more of them. Much like disinformers use multiple channels and narratives in order to reach as many segments of an audience as possible[vi], actors countering disinformation also need to try adopting many different approaches.

Four lines of defense

Governments might assist the ongoing efforts to slow the spread of disinformation, which poses a threat to multiple spheres of human life, ranging from elections[vii] to finances[viii] and public health[ix], as demonstrated by the COVID-19 pandemic.

To offer a more holistic approach, four basic “lines of defense” to counter hostile information operations have been developed, all of which should be applied in a coordinated manner.

  1. Document the threat, in order to collect more data on it and gain a stronger understanding of its place in the information environment.
  2. Raise awareness about the threat, to reach as many audiences as possible, making them aware of it and, thus, inoculated against it. In contrast to Line 1, which seeks to obtain more information about the threat, Line 2 attempts to ensure that more people have at least basic information about it.
  3. Repair, mitigate, and prevent weaknesses found within the information system, in order to decrease the ease of exploitation by attackers and make their target smaller and harder to hit. This line of defense should mitigate the effect of the threat on the target.
  4. Limit, challenge, constrain, punish, and deter information aggressors. This line of defense, unlike the above ones, is not directed at the victims of information aggression but aims to decrease aggressors’ desire to be aggressive instead.

Within each of these lines, there are various tactical countermeasures. To underline the point made above: none of them will solve the entire problem on its own. For instance, focusing only on the victims of information aggression (Lines 1–3) while leaving out the part about limiting and punishing the aggressor will only mean that the aggressor has to adapt to a changing environment; their willingness to be aggressive will remain unchanged, which is exactly what we are witnessing at the moment[x].

Line 1: Documenting the threat

Documenting disinformation, identifying it, and collecting occurrences of it is the very basis of countering disinformation. It is an imperative measure without which nothing else can be done in an informed way; at the same time, it is a measure that is still heavily underestimated.

Without documenting cases of disinformation, it is impossible to talk about a disinformation campaign, as we cannot provide the summary of cases needed to establish that a campaign exists. It is impossible to talk about an increase or decrease in disinformation: if we do not document the baseline and do not know what the norm is, we will never be able to describe a deviation from it. It is impossible to measure how many people believe a given piece of disinformation, because there is no documented case to measure against. And without debunking a false story, it is impossible to talk about disinformation at all, since disinformation is defined as an “intentionally spread false story” and the “false” in the story has not yet been identified. The fear that debunking disinformation in fact strengthens a false story instead of deconstructing it appears to be unfounded[xi][xii].

Without properly documenting and understanding the problem, we are just fighting in a fog, akin to being in a war and not knowing how many tanks and missiles the enemy controls or how many people per day they manage to kill.

Line 2: Raising awareness about the threat

A lot of the work anticipated in Line 2 will be carried out simply by doing Line 1 properly: documenting the threat provides the “ammunition” for raising awareness about it. However, the aim of Line 2 is different. In Line 1, we try to gain more information about the problem for specialists and policymakers, while in Line 2 we try to broaden general knowledge about the threat and reach as many people as possible. This requires communication experts and the involvement of numerous, varied voices in order to reach many different audiences.

Some methods of communication may work effectively for one part of a given audience and be ineffective for another, but this does not justify refraining from using them. As journalist Anne Applebaum wrote about the anti-vaccination disinformation campaign in Italy before the onset of COVID-19, different approaches to raising awareness about a threat will reach and affect different people, but this does not make one approach better or worse than another[xiii]. We simply need to try more approaches, thus reaching more target groups.

Line 3: Preventing and repairing weaknesses in the information system

Line 3 slightly overlaps with Line 2 in the sense that, by raising awareness effectively, we can minimize some of the weaknesses found and exploited within the information system. Overcoming other weaknesses, however, requires more than increased awareness.

Among the weaknesses most exploited by disinformation actors are the existing tensions within target societies. Multiple pieces of research document how Kremlin propaganda has exploited racial tensions during the Cold War[xiv] and the recent US elections[xv], as well as generational gaps, urban-rural divides, attitudes toward the LGBTQ+ community, and historical tensions within countries[xvi]. Any topic with the potential to polarize audiences and evoke emotions can be used to divert discussion from the rational realm and make an audience more susceptible to manipulation[xvii].

Constant investigation and evaluation of the techniques used by aggressors is crucial for deciding how to mitigate these weaknesses. On its own, however, this is not enough to overcome the problem of disinformation, because perpetrators will always find new ways to influence target audiences. It is therefore essential to limit the willingness of aggressors to conduct such attacks in the first place.

Line 4: Punishing, deterring, and limiting information aggressors

If acts of aggression go unpunished, aggressors may hit the victim multiple times. While the victim is left to deal with the aggressive disinformation, the lack of punishment encourages aggressors to continue, since the price of such actions is low. Finally, inaction in the face of such aggression could induce other actors to behave similarly and increase the risk of even more disinformation campaigns being launched.

The easiest of the nonlegal steps is the “name and shame” strategy. This may discourage some information aggressors or, at the very least, help raise awareness about a given disinformation campaign and broaden audiences’ understanding of the threat.

Individuals and organizations can ignore the “pseudomedia”, which serves as a gateway for various disinformation campaigns. If an outlet consistently spreads disinformation, there should be no incentive to legitimize it with comments or interviews or treat it as a credible source that occasionally makes mistakes. There are many other actions and steps[xviii] that could be taken on top of the measures mentioned in this article.

Conclusions

There is no silver-bullet solution to disinformation; what is needed is a coordinated effort by multiple actors, including governments, the media, and civil society. These measures can focus on four areas: documenting the threat, raising awareness about it, preventing and repairing weaknesses, and punishing and deterring information aggressors. Together, these steps can help us both understand and limit the influence of disinformation campaigns and make societies more resilient to them.

 

 

References

[i] Jack, C. (n.d.). Lexicon of Lies: Terms for Problematic Information. Data & Society Research Institute. https://datasociety.net/pubs/oh/DataAndSociety_LexiconofLies.pdf

[ii] Krafft, P. M., & Donovan, J. (2020). Disinformation by Design: The Use of Evidence Collages and Platform Filtering in a Media Manipulation Campaign. Political Communication, 37(2), 194–214. https://doi.org/10.1080/10584609.2019.1686094

[iii] Wardle, C., & Derakhshan, H. (2017). Information Disorder: Toward an Interdisciplinary Framework for Research and Policymaking (DGI(2017)09). Council of Europe. https://rm.coe.int/information-disorder-report-version-august-2018/16808c9c77

[iv] Wardle, C., & Derakhshan, H. (2017). Information Disorder: Toward an Interdisciplinary Framework for Research and Policymaking (DGI(2017)09). Council of Europe. https://rm.coe.int/information-disorder-report-version-august-2018/16808c9c77

[v] Giles, K. (2016). Handbook of Russian Information Warfare. NATO Defense College, Research Division.

[vi] Kalenský, J. (2019). Russian Disinformation Attacks on Elections: Lessons from Europe. Testimony before the U.S. House Foreign Affairs Subcommittee on Europe, Eurasia, Energy, and the Environment. https://web.archive.org/web/20210509092416/https://docs.house.gov/meetings/FA/FA14/20190716/109816/HHRG-116-FA14-Wstate-KalenskJ-20190716.pdf

[vii] Ian Vandewalker. (2020). Digital Disinformation and Vote Suppression. Brennan Center for Justice. https://www.brennancenter.org/our-work/research-reports/digital-disinformation-and-vote-suppression

[viii] Nimmo, B. (2018, January 12). South Africa: Fake News, Financial Impact. DFRLab. https://medium.com/dfrlab/south-africa-fake-news-financial-impact-3f0599e6bfd8

[ix] Bandeira, L. (2020, May 27). Empty hospitals, fake burials and chloroquine: Systemic disinformation downplays COVID-19 in Brazil. DFRLab. https://medium.com/dfrlab/empty-hospitals-fake-burials-and-chloroquine-systemic-disinfo-downplays-covid-19-7eb91d784165

[x] Snegovaya, M., & Watanabe, K. (2021). The Kremlin’s Social Media Influence Inside the United States: A Moving Target. Free Russia Foundation. https://web.archive.org/web/20210509090339/https://www.4freerussia.org/the-kremlin-s-social-media-influence-inside-the-united-states-a-moving-target/

[xi] Ibid

[xii] Swire-Thompson, B., DeGutis, J., & Lazer, D. (2020). Searching for the Backfire Effect: Measurement and Design Considerations. Journal of Applied Research in Memory and Cognition, 9(3), 286-299. https://web.archive.org/web/20210509101408/https://www.sciencedirect.com/science/article/pii/S2211368120300516

[xiii] Applebaum, A. (2019). Italians decided to fight a conspiracy theory. Here’s what happened next. The Washington Post. https://web.archive.org/web/20210510093738/https://www.washingtonpost.com/opinions/global-opinions/italians-decided-to-fight-a-conspiracy-theory-heres-what-happened-next/2019/08/08/ca950828-ba10-11e9-b3b4-2bb69e8c4e39_story.html

[xiv] Rid, T. (2020). Active measures: The secret history of disinformation and political warfare. Farrar, Straus and Giroux.

[xv] Green-Riley, N., & Stewart, C. (2020). A Clapback to Russian Trolls. The Root. https://web.archive.org/web/20210510095401/https://www.theroot.com/a-clapback-to-russian-trolls-1841932843

[xvi] EUvsDisinfo. (n.d.). The Strategy and Tactics of the Pro-Kremlin Disinformation Campaign. EUvsDisinfo.eu. http://web.archive.org/web/20210508100826/https://euvsdisinfo.eu/the-strategy-and-tactics-of-the-pro-kremlin-disinformation-campaign/

[xvii] Ibid

[xviii] Kalenský, J. (2019). Russian Disinformation Attacks on Elections: Lessons from Europe. Testimony before the U.S. House Foreign Affairs Subcommittee on Europe, Eurasia, Energy, and the Environment. https://web.archive.org/web/20210509092416/https://docs.house.gov/meetings/FA/FA14/20190716/109816/HHRG-116-FA14-Wstate-KalenskJ-20190716.pdf