0. Overview - Platforms

Written by Vera Zakem, Senior Technology and Policy Advisor at the Institute for Security and Technology and CEO of Zakem Global Strategies; Kip Wainscott, Senior Advisor for the National Democratic Institute; and Daniel Arnaudo, Advisor for Information Strategies at the National Democratic Institute


Digital platforms have become prominent resources for sharing political information, organizing communities, and communicating on matters of public concern. However, these platforms have undertaken a mix of responses and approaches to counter the growing prevalence of disinformation and misinformation affecting the information ecosystem. With a broad spectrum of communities struggling to mitigate the harmful effects of disinformation, hate speech, coordinated influence operations, and related forms of harmful content, the private sector's access to privileged and proprietary data and metadata often uniquely positions it to understand these challenges.


A number of prominent social media companies and messaging platforms are leveraging their abundant data to help inform responses to disinformation and misinformation campaigns. These responses vary widely in character and efficacy, but they generally fall into one of the following three categories:

  1. policies, product interventions, and enforcement measures to limit the spread of disinformation;

  2. policies and product features to provide users with greater access to authoritative information, data, or context; and

  3. efforts to promote a stronger community response and societal resilience to disinformation and misinformation, including through digital literacy and internet access.

Many platforms have implemented new policies, or changed the enforcement of existing ones, in response to disinformation related to the COVID-19 pandemic, the 2020 U.S. presidential election, and the January 6th assault on the U.S. Capitol. As false information related to COVID-19 proliferated, fact-checking increased 900 percent from January to March 2020, according to an Oxford University study. The World Health Organization has characterized this spread of misinformation about COVID-19 as an "infodemic," which coincided with a period of increased social media use as many people were confined to their homes during the pandemic.

In addition, the 2020 U.S. presidential election prompted major platforms to update their policies on coordinated inauthentic behavior, manipulated media, and disinformation campaigns targeting voters and candidates. Similarly, the January 6th attack on the U.S. Capitol motivated social media platforms to once again reexamine and update their policies, and their enforcement of them, as they relate to disinformation and the potential risks of offline violence.

This chapter examines platform responses in greater detail in order to provide a foundational understanding of the steps social media platforms and encrypted messaging services take to address disinformation. Across all of these approaches, it is important to note that platform policies and enforcement actions are constantly evolving as the threat landscape changes. To help users, policymakers, peers, and civil society stakeholders track these changes, most prominent platforms, including Twitter, YouTube, and Facebook, regularly publish transparency reports with data on how they are updating their policies, enforcement strategies, and product features in response to the dynamic threat landscape and societal challenges. To help account for these ever-shifting dynamics, and as highlighted throughout this chapter, platforms have in many cases partnered with local groups, civil society organizations, media, academics, and other researchers to design responses to these challenges in the online space. Given the evolving nature of the threat landscape, this chapter documents the relevant policy, product, and enforcement actions based on the information available as of the publication of this guide.