Mass Media, Propaganda, and Social Influence: Evidence of Effectiveness from Courchesne et al. (2021)
Janet Pauketat
Research Fellow
June 6, 2023

Edited by Michael Dello-Iacovo and Jacy Reese Anthis. Thanks to Spencer Case, Maryam Khan, and Jacob Peacock for their thoughtful comments.

Summary

The Social Science of Social Influence

Mass Media and Influence Operations

Influence Operation Effectiveness

Findings from Courchesne et al. (2021)

Expanding the Empirical Analysis

Effect Sizes

Exemplar Studies

Limitations

Additional Empirical Research

Future Directions

Appendix

Summary

Governments, farmed animal advocates, and AI safety advocates rely on social influence strategies to intentionally change others’ attitudes and behaviors, aiming to create new behaviors or disrupt existing socio-political structures. Large-scale organizational strategies that rely on mass media to change public opinion or behavior en masse, such as radio propaganda, social media messaging, and TV campaigns, and that are often conducted by governments or large non-governmental organizations (NGOs), have been labeled influence operations (or information operations). Courchesne et al. (2021) analyzed 82 articles containing empirical studies of influence operations and found that influence operations are effective.[1] In this blog post, I review literature on persuasion and social influence that bears on similar questions, such as that reviewed in Sentience Institute’s “Health Behavior Interventions Literature Review” (Harris 2020), which found limited but arguably robust evidence of small effect sizes. I unpack details of Courchesne et al.’s methodology, conduct an expanded analysis of the same studies, and consider the limitations of current research. I find that their conclusion holds, but with limitations: effect sizes are small; the studies mostly examine the effects of long-term (e.g., years) or short-term (e.g., days) exposure to traditional mass media; observational data can be unreliable; and the current literature may be heavily affected by publication bias. I conclude with suggestions to conduct meta-analyses, quantitatively assess publication bias, and examine epistemological assumptions such as homogeneous treatment effects and the correlation between individual data and real-world collective behavior.

The Social Science of Social Influence

Humans are highly social. We have a need to belong to esteem-building groups and to have those groups be important and favored relative to other groups. We want others to adopt our preferred view of the world so that we can share reality. We are also a species of change. We change physically throughout life, the world around us changes, and we strive to make changes so that the future is worth living. Those working in advocacy aim to promote change towards their position (e.g., increasing plant-based product consumption, increasing pro-environmental behaviors, increasing support for policies that address systemic inequalities). Governments try to limit the influence of other countries’ misinformation campaigns within their own country. Politicians seek to limit the influence of their opposition’s messages while increasing the influence of their own messages. Much of this is made possible by social influence.

Social influence and persuasion have long been focal topics within the psychological and sociological sciences, meaning that many empirical studies have sought to explain how, when, and why people can be persuaded to change. Harris (2021) evaluated the effectiveness of persuasive strategies for changing public opinion, taking a positive view of social influence as a way to shift people towards a prosocial position.

The American Psychological Association (APA) defines social influence and persuasion:

Social Influence:

Persuasion:

Dual Process Model of Persuasion:

Four reviews of the social scientific literature on persuasion and social influence demonstrate the breadth of research on social influence, persuasion, and attitude change at the individual level. This research provides much mechanistic evidence explaining how, when, and why individual-level social influence and persuasion work. The reviews are summarized in Table 1 and cover research up to 2010. Reviews of the empirical research since 2010 have focused even more on mechanisms of influence like neural correlates, facial behavior, and emotion (e.g., this review on the neuroscience of persuasion).

Table 1: Summaries of Social Scientific Reviews of Social Influence, Persuasion, and Attitude Change

Each entry below lists the review title and authorship, followed by its relevant points and its Google Scholar “cited by” count as of 21 February 2023.

Attitudes and Attitude Change (Bohner & Dickel, 2011)

  • Attitudes vary in strength (i.e., durability, impactfulness), can be affected by situational cues, and can be ambivalent (i.e., positive and negative attitudes towards the same evaluative target).
  • People can be aware of their attitudes (i.e., explicit attitudes) or unaware of their attitudes (i.e., implicit attitudes).
  • Physical perception like feeling the temperature of a room can affect attitudes and cognition (i.e., embodied cognition).
  • Attitude change involves the retrieval of stored evaluations and the consideration of new evaluative information.
  • Attitude change can result from consideration of information available in different situations or from a change in the memory underlying the attitude.
  • The success of persuasive attempts depends on the processing resources available to persuasion targets. Processing resources are determined by persuasion targets’ motivation and ability to process a message.
  • Information presented earlier can bias the processing of subsequently presented information.
  • Thinking about one’s own thoughts (i.e., meta-cognition) about a persuasive message is likely to affect the persuasiveness of the message.
  • Attitudes can lead to biased information processing. People tune their messages to their audience’s attitudes which can lead to biased recall and evaluation. People are motivated to select attitude congruent information which can lead to selective information exposure.
  • Spontaneous behaviors are predicted better by implicit attitudes and deliberate behaviors are predicted better by explicit attitudes.

Google Scholar “cited by”: 2,703

Attitudes and Persuasion (Crano & Prislin, 2006)

  • Attitudes can be formed outside of awareness with evaluative conditioning (associating a valenced attitude object with a nonvalenced attitude object) and mere exposure methods but existing attitudes are unlikely to be changed with these methods.
  • Attitudes can be changed through heuristics like source expertise or peripheral cues like source attractiveness when persuasion targets are likely to be unmotivated and have little opportunity to think about the message.
  • Attitudes can be changed through systematic or elaborative analysis of persuasive messages that are logical, well-reasoned, and data-based when the persuasion targets are likely to be motivated and have the opportunity to think about the message.
  • Social consensus can increase the value of certain attitudes or positions and the likelihood that they are considered thoughtfully which has implications for the effectiveness of messages coming from a minority position.
  • Minority influence is more likely to be considered when it is consistently advocated and is very distinctive.
  • Attitudes can be changed indirectly by minority dissent shaping divergent thinking and increasing the quality of information processing.
  • Negative reactions to a persuasive message can lead to resistance to the message. This resistance may be strong or weak, depending on the effort and quality of any counterarguments used and on the perception of successful resistance to expert sources.
  • Using affect as a part of persuasive messaging may only be effective when the desired change is an attitude based more on affect than cognition.
  • Mood and emotion can influence the degree to which persuasive messages are processed, especially when those messages are self-relevant (e.g., for health-related persuasion). Emotions are used as information and persuasive messages may be more effective when the message framing matches the affective state of the target.

Google Scholar “cited by”: 1,095

Social Influence: Compliance and Conformity (Cialdini & Goldstein, 2004)

  • People have three key motivations that affect compliant and conforming behaviors: to have accurate perceptions of reality, to be socially connected, and to feel positive about themselves.
  • Affective states like moods, persuasive techniques that aim to disrupt and then re-frame, authority and the motivation to be obedient, and social norms can produce compliance in pursuit of accuracy.
  • Liking, reciprocity, and persuasive techniques that rely on making reciprocal concessions can produce compliance in pursuit of social connectedness.
  • Persuasive techniques that rely on self-perception and making commitments can produce compliance in pursuit of feeling positive about oneself.
  • Perceived social consensus, dynamic social systems, and goals automatically activated by a certain context can produce conformity in pursuit of accuracy.
  • Mimicry and attaining social approval can produce conformity in pursuit of social connectedness.
  • Social identification with a persuasive source, being aligned with the majority or minority, and deindividuation can produce conformity in pursuit of feeling positive about oneself.

Google Scholar “cited by”: 6,541

Attitude Change: Persuasion and Social Influence (Wood, 2000)

  • Influence studied from a persuasion paradigmatic approach has focused on individuals’ processing of detailed argumentation in minimally social contexts.
  • Influence studied from a social influence paradigmatic approach has focused on the source of the message in complex social contexts with complex social interactions.
  • Influence and persuasion can take place in public contexts where people think their responses are witnessed by others or in private contexts where people think their responses are known only to themselves. Classic perspectives held that only private conformity (i.e., ‘true’ internal and external change) evidenced effective persuasion. Research in the 1990s showed this to be unsupported, with public conformity (i.e., external change to better fit in a group) also leading to attitude change persistence.
  • People can be motivated to change by dissonance between their attitudes and behaviors and by the persuasion function (e.g., a self-concept motive) matching the function underpinning the message recipient’s attitude (e.g., a concern to express their values).
  • People can have multiple attitudes towards one attitude object (e.g., a dietary choice could be viewed as positive because it helps relieve the suffering of animals and as negative because it increases tension with family members).
  • Framing is one strategy for influence operations targeted at objects with multiple attitudes. There are too many attitudes to change but if the object itself is perceived to change, the influence attempt can still succeed.
  • Influence and persuasion attempts that are in line with targets’ motives are evaluated more favorably and prompt more thoughtful consideration resulting in motivated biased processing.
  • Moods and emotions can serve as information that affects information processing (e.g., extremely scary messages reduce processing and persuasion but low to moderately scary messages increase processing and persuasion).
  • Social group membership affects influence operations (e.g., prototypical group members are more influential).
  • Social consensus can provide subjective information about the validity of persuasive messages.

Google Scholar “cited by”: 1,671

Mass Media and Influence Operations

Influence operations are large-scale, highly strategic, and usually organization-sponsored (e.g., by governments or large NGOs) social influence or persuasion tactics, often based on mass media, meant to change or disrupt attitudes, behaviors, and institutions. Influence operations often target individuals but can serve to change both individuals’ behaviors (e.g., dietary choices) and institutions (e.g., political norms). Influence operations can include any action or strategy (e.g., TV dramas, text messaging, radio propaganda) in any domain (e.g., social issues, politics, health) that is employed to change any outcome (e.g., public opinion, political power, prejudice, moral inclusion). They can be distinguished from small-scale interpersonal strategies to convince one or two individuals to change. However, large- and small-scale strategies are likely underpinned by the same social influence and psychological mechanisms.

The term “influence operations” has been used within political science and international relations to refer specifically to information operations as warfare, influences that advance national interests outside of national spaces, and competitive information (and misinformation) collection and dissemination particularly in governmental and military contexts. In this blog post, I follow Courchesne et al. (2021) and take a somewhat wider scope that includes non-military and non-government campaigns.

Attempts to change attitudes and behaviors are widespread amongst and between human social groups and are a part of human evolutionary history. Contemporary influence operations take many forms, from messages passed through traditional mass media (e.g., radio and TV programs) to propaganda campaigns on social media, social influencers promoting brands online, and bots digitally spreading misinformation.

Influence Operation Effectiveness

Courchesne et al. (2021) evaluated the state of research on influence operations by sourcing and summarizing 82 articles with empirical studies, primarily from political science and international relations. This report was summarized by Bateman et al. (2021) for the Carnegie Endowment for International Peace.

I summarize their key findings, conduct an expanded analysis of the same articles, evaluate the evidence, and consider additional research largely from the psychological and sociological sciences.

Courchesne et al. identified articles from 1995-2020 in a search using Google Scholar and Princeton University library’s Articles+ database with a list of keywords available in Appendix A of their report. They found an initial pool of 16 articles and looked at articles that cited or were cited by those to build their sample of 82 articles. To be included, articles had to have at least one study that empirically investigated an influence operation that targeted a specific population and had credible statistics comparing outcomes for those who were exposed to the influence operation with outcomes for those who were not exposed.
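Their sampling procedure amounts to a citation snowball: start from a set of seed articles, repeatedly follow the references and the citing articles, and keep anything that meets the inclusion criteria. The sketch below illustrates that general procedure only; the fetch and screening functions are hypothetical placeholders rather than a real citation database API, and the two-round depth limit is an assumption for illustration.

```python
from collections import deque

def fetch_references(article_id):
    """Hypothetical placeholder: return IDs of articles that `article_id` cites."""
    raise NotImplementedError

def fetch_citers(article_id):
    """Hypothetical placeholder: return IDs of articles that cite `article_id`."""
    raise NotImplementedError

def meets_inclusion_criteria(article_id):
    """Hypothetical placeholder: True if the article empirically compares outcomes
    for an exposed group with outcomes for an unexposed group in a target population."""
    raise NotImplementedError

def snowball_sample(seed_ids, max_rounds=2):
    """Breadth-first snowball over the citation graph, starting from seed articles."""
    included = set()
    seen = set(seed_ids)
    frontier = deque((seed, 0) for seed in seed_ids)
    while frontier:
        article_id, depth = frontier.popleft()
        if meets_inclusion_criteria(article_id):
            included.add(article_id)
        if depth >= max_rounds:
            continue
        # Follow citations in both directions: cited-by and cites.
        for neighbor in list(fetch_references(article_id)) + list(fetch_citers(article_id)):
            if neighbor not in seen:
                seen.add(neighbor)
                frontier.append((neighbor, depth + 1))
    return included
```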

Findings from Courchesne et al. (2021)

Expanding the Empirical Analysis

I analyzed the 82 articles sourced and summarized by Courchesne et al. using a coding scheme I developed to probe additional features of the research on influence operations.

First, I used keywords and intuition to code studies by 1) the domain of the outcome, 2) the study context, and 3) the influence operation type.

  1. Outcome domain
  2. Study context
  3. Influence type

Each study was coded with exactly one outcome domain, study context, and influence type. For example, a study could be coded as having a political outcome but not a social, health, basic science, or consumer outcome. Likewise, a study could be either contrived or real-world and have only one of the four influence types. This categorization of articles into one of four possible influence types is loose, building on Courchesne et al.’s framework separating traditional and social media. Some influence operations could be classified under multiple influence types. Misinformation and disinformation might be better considered as sub-types of traditional media and social media since misinformation and disinformation can be passed through traditional and social media. I classified each study under only one of these four types to emphasize the distinctions between these four areas of research and to increase clarity on whether the effect of the influence operation is due more to one influence type than another (e.g., the effect of the influence is due more to misinformation than to social media).

Second, I used keywords to code the studies based on three other features: data type, study features, and influence specifics.

  1. Data type
  2. Study features
  3. Influence specifics

Studies did not have to qualify as having any of these features and they could qualify as having multiple, even within one category. For example, a study with 100 participants that had contemporary data collection would not qualify as having archival data or big data. A study that tested the effects of WhatsApp messaging in a political campaign to increase votes for a candidate would qualify for three options under “study features”: examined a mechanism of influence operations, targeted individuals with messages, and tried to actively change individuals’ behavior. That same study would also qualify as having an “influence specific” of serving to actively shape power.

This coding scheme reflects my judgments as to which categories a study belongs to rather than a purely objective assessment. For instance, an experiment on Twitter could be coded as a contrived situation if it included randomization to a treatment or control group and interaction with researchers (e.g., this political polarization study). The same experiment could be categorized as having a real-world context because it occurred in real time, in public, and on a real platform. I categorized this study as real-world rather than contrived because contact with the researchers was limited, the treatment was not isolated from normal Twitter usage, and the treatment did not entail substantially different Twitter usage norms than usual. For situations like this, I categorized based on my judgment of the degree to which the study expressed the relevant features. Some of these choices could have been made differently and additional interpretation should consider the specifics of the study.
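To make the keyword step concrete, the sketch below shows the kind of keyword matching described above, applied to the outcome domain categories. It is a minimal illustration, not the actual coding script: the keyword lists are abbreviated from Table 2, the function names are hypothetical, and ambiguous cases were ultimately resolved by judgment as described.

```python
import re

# Abbreviated keyword lists adapted from Table 2 (prefix matching, case-insensitive).
OUTCOME_KEYWORDS = {
    "Political": ["politic", "government", "populist", "vote", "voting"],
    "Social": ["public opinion", "trust", "protest", "immigration", "police", "crime"],
    "Health": ["health", "vaccine", "condom", "pandemic", "birth"],
    "Basic Science": ["knowledge", "cognition", "memory", "misinformation", "fake news"],
    "Consumer": ["consum"],
}

def keyword_hits(text, keywords):
    """Count how many keywords appear in the text (as word-initial matches)."""
    text = text.lower()
    return sum(1 for kw in keywords if re.search(r"\b" + re.escape(kw), text))

def suggest_outcome_domain(text):
    """Return the single best-matching outcome domain, or None if nothing matches.

    Each study receives exactly one domain; in practice, ambiguous cases were
    resolved by judgment rather than by raw keyword counts alone.
    """
    counts = {domain: keyword_hits(text, kws) for domain, kws in OUTCOME_KEYWORDS.items()}
    best_domain, best_count = max(counts.items(), key=lambda item: item[1])
    return best_domain if best_count > 0 else None

# Example usage with one of the article titles from Table 2:
print(suggest_outcome_domain(
    "The Effect of Fake News on Populist Voting: Evidence from a Natural Experiment in Italy"
))
```

Run on that title, the sketch returns “Political,” but a study whose title and abstract touch several domains would need the kind of judgment call discussed above.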

Table 2 shows the coding scheme, keywords, and number of articles categorized under each feature. The complete codebook, including article titles, is in the Appendix.

Table 2: Summary of the 82 Articles Categorized by Outcome Domain, Study Context, Influence Types, Data Type, Study Features, and Influence Specifics

Each coding scheme below lists its categories with the number of studies in parentheses, a description of the category, an example article, and the keywords used.

Outcome Domain
  • Political (31): Related to governance or the control of public affairs. Example: The Effect of Fake News on Populist Voting: Evidence from a Natural Experiment in Italy. Keywords: politic, government, populist, vote, voting.
  • Social (27): Related to the operations of people or social groups. Example: When and How Negative News Coverage Empowers Collective Action in Minorities. Keywords: public opinion, trust, protest, immigration, police, crime, Muslim.
  • Health (12): Related to physical or mental care and well-being. Example: Impact of a Local Newspaper Campaign on the Uptake of the Measles Mumps and Rubella Vaccine. Keywords: health, vaccine, autism, condom, COVID, AIDS, pandemic, birth.
  • Basic Science (11): Demonstration of a scientific principle to build knowledge on how an observable process works. Example: Brief Exposure to Misinformation Can Lead to Long-Term False Memories. Keywords: knowledge, affect, cognition, cognitive, memory, misinformation, fake news.
  • Consumer (1): Related to owning, purchasing, or using certain goods. Example: A Tear in the Iron Curtain: The Impact of Western Television on Consumption Behavior. Keywords: consum*.

Study Context
  • Lab-based or contrived (21): A situation contrived by the researchers to elicit a hypothesized observable response. Example: Does Emotional or Repeated Misinformation Increase Memory Distortion for a Trauma Analogue Event? Keywords: experiment, manipulated, Qualtrics, Mturk, random assignment.
  • Real-world (61): A situation occurring naturally, not created by the researchers. Example: Soap Operas and Fertility: Evidence from Brazil. Keywords: real-world, natural experiment.

Influence Type
  • Misinformation (11): Influence by means of false or inaccurate information that may be deliberately misleading. Example: Exposure to Health (Mis)information: Lagged Effects on Young Adults’ Health Behaviors and Potential Pathways. Keywords: misinform, fake news, conspiracy.
  • Disinformation (2): Researchers specifically call the influence “disinformation.” Example: Cognitive and Affective Responses to Political Disinformation in Facebook. Keywords: disinformation.
  • Traditional Media (51): Mass influence by means of traditional propaganda, often through TV, radio, print news, or speeches. Example: Propaganda and Protest: Evidence from Post-Cold War Africa. Keywords: traditional, propaganda, radio, TV, mass media, broadcast, speech, campaign, advertis*.
  • Social Media (18): Influence by means of online social media networks like Facebook, WhatsApp, and Twitter. Example: Exposure to Opposing Views on Social Media can Increase Political Polarization. Keywords: network, social media, Twitter, Facebook, WhatsApp.

Data Type
  • Historic or Archival (27): Data collected in the past and used to understand a historical influence operation. Example: Radio and the Rise of the Nazis in Prewar Germany. Keywords: Nazi, Cold War, East German*, archival, histor*, 199*, 198*, 197*, election, war.
  • Big (9): Complex data with greater than 1 million data points. Example: A 61-Million Person Experiment in Social Influence and Political Mobilization. Keywords: million, big data.

Study Features
  • Mechanistic (20): Examining how or why influence operations work in specific ways. Example: Rise of the Machines? Examining the Influence of Social Bots on a Political Discussion Network. Keywords: memory, emotion, cognit*, conspiracy, collective action, framing, polarization, uncertain.
  • Individual Messaging (18): Using messages directed at influencing individuals to initiate change. Example: Messages on COVID-19 Prevention in India Increased Symptoms Reporting and Adherence to Preventive Behaviors among 25 Million Recipients with Similar Effects on Non-Recipient Members of Their Communities. Keywords: message, induced, manipulated, manipulation, random assignment.
  • Institutional Outcome (37): An aggregated or collective outcome representing political or social institutions like political party power or social capital. Example: Politician Hate Speech and Domestic Terrorism. Keywords: vote share, trust, public opinion, social norm, norm, protest, riot, crime, institution, aggregate.
  • Natural Experiment (36): Individuals exposed to different real conditions determined by real events not controlled by the researchers. Example: Propaganda and Conflict: Evidence from the Rwandan Genocide. Keywords: natural experiment, natural, topograph*, geograph*, signal, exposure.
  • Active Behavior Change (8): Study actively trying to get individuals to act or change their behavior. Example: How the Pro-Beijing Media Influences Voters: Evidence from a Field Experiment. Keywords: intended action, change, behavior, intention.

Influence Specifics
  • Bots (6): Influence operations that mention or use bots and algorithms. Example: Digital Propaganda, Political Bots and Polarized Politics in India. Keywords: bots, algorithms, crawler.
  • Moral Circle Expansion (4): Influence could expand the boundaries of the moral circle. Example: Erasing Ethnicity? Propaganda, Nation Building, and Identity in Rwanda. Keywords: conflict, violen*, identity, polarization.
  • Intentional Good (6): Influence to intentionally do more good on an issue. Example: Evaluating the Impact of a National AIDS Prevention Radio Campaign in St. Vincent and the Grenadines. Keywords: health, prevention, immigration, race, change, social, conflict, voting.
  • Active Power (12): Influence actively or directly seeks to change access to power or control. Example: WhatsApp and Political Instability in Brazil: Targeted Messages and Political Radicalisation. Keywords: politic, power.
  • Passive Power (27): Influence passively or indirectly seeks to change access to power or control. Example: Electoral Effects of Biased Media: Russian Television in Ukraine. Keywords: politic, power, radio, TV, television.

Effect Sizes

Most studies found that influence operations had a positive effect (i.e., the intended effect). There was little evidence of influence operations backfiring to produce change in the opposite direction. Three articles reported that influence operations backfired for a subset of people (i.e., the effect of influence operations was moderated by another factor).

  1. Bail et al. (2018) found that conservative, Republican identification prompted a backfiring effect of exposure to liberal perspectives on Twitter.
  2. Kao (2021) found that pre-existing China-skeptical attitudes prompted a backfiring effect of exposure to pro-Beijing media in Taiwan.
  3. Schmuck et al. (2020) found that disagreement with anti-Muslim information in Austria prompted a backfiring effect.

Most of the positive effects were small (see Table 3). Small effects may be as meaningful in one context as medium or large effects in other contexts. For instance, a small effect of exposure to real-world populist Facebook messages on increases in anti-refugee crime may entail more statistical error and a smaller effect size because of the complexity and uncertainty of the real-world social media context, compared with a large effect in a contrived study of exposure to populist messages on perceived discrimination. Very small effects can also reach statistical significance simply because a sample is large, and large effect estimates can arise by chance in small samples. Both small and large effects can be meaningful, and studies should be weighed in terms of impact and importance based on their methodology, sample size, context, and ecological validity in addition to effect size.

I evaluated effect sizes roughly based on standards within various fields (e.g., effect sizes in psychological science, persuasion rate in economics), my interpretation of statistics like % increase in vote share (e.g., 0% indicates no effect whereas 2% is larger than no effect but smaller than 20%), and whether or not the influence operation functioned independently of or was moderated by the presence of other factors (e.g., if the influence operation worked more strongly for men than women or if the influence operation depended on people holding a certain prior attitude or belief). Effects categorized as “none to small,” “small to medium,” or “medium to large” often entailed this sort of moderation. I was less confident in assigning effect sizes to influence operations that were primarily impactful in certain subgroups, preferring instead to indicate uncertainty.

Comparing effect sizes across disciplines and study contexts is difficult given the different statistics reported (e.g., percentage increase, percentage point increase, standardized and unstandardized regression coefficients), the different norms for interpreting effect sizes, and the different implications of effect sizes in different contexts. It is unclear how to meaningfully compare effects like a 15% increase in willingness to interact across ethnic group boundaries in post-genocide Rwanda following exposure to radio programming and a 2.6 percentage point increase in the vote share for an extreme nationalist party in Croatia following exposure to Serbian radio programming.
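As a small arithmetic illustration of one of these pitfalls, a percentage-point change (an absolute difference) and a percentage change (a relative difference) can describe the same result very differently depending on the baseline. The baseline vote share below is an assumed number for illustration, not a figure from the reviewed studies.

```python
# Illustration only: the 30% baseline vote share is an assumption, not taken from the studies.
baseline_vote_share = 0.30          # vote share without exposure to the broadcast
point_increase = 0.026              # a 2.6 percentage-point increase (absolute change)

new_vote_share = baseline_vote_share + point_increase       # 0.326
relative_increase = point_increase / baseline_vote_share     # about 0.087, i.e., ~8.7%

print(f"Vote share rises from {baseline_vote_share:.1%} to {new_vote_share:.1%}")
print(f"An absolute change of 2.6 points is a relative increase of {relative_increase:.1%}")
```

On a 10% baseline, the same 2.6-point change would instead be a 26% relative increase, which is one reason raw percentages and percentage points cannot be compared directly across studies.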

Table 3: Effects of Influence Operations in the 82 Articles

For each effect size category, the table reports the percentage of all 82 studies (with the number of studies in parentheses) and, within each study context, outcome domain, and influence type, the percentage of that category’s studies showing that effect size.

None: 6% of all studies (5)
  • Study context: 0% of contrived, 8.2% of real-world
  • Outcome domain: 0% of basic science, 0% of consumer, 0% of health, 6.5% of political, 11.1% of social
  • Influence type: 50% of disinformation, 0% of misinformation, 11.1% of social media, 3.9% of traditional media

None to small: 6% of all studies (5)
  • Study context: 4.8% of contrived, 6.6% of real-world
  • Outcome domain: 0% of basic science, 100% of consumer, 0% of health, 6.5% of political, 7.4% of social
  • Influence type: 0% of disinformation, 0% of misinformation, 0% of social media, 9.8% of traditional media

Small: 41% of all studies (34)
  • Study context: 28.6% of contrived, 45.9% of real-world
  • Outcome domain: 18.2% of basic science, 0% of consumer, 33.3% of health, 54.8% of political, 40.7% of social
  • Influence type: 0% of disinformation, 27.3% of misinformation, 38.9% of social media, 47.1% of traditional media

Small to medium: 15% of all studies (12)
  • Study context: 28.6% of contrived, 9.8% of real-world
  • Outcome domain: 36.4% of basic science, 0% of consumer, 25% of health, 9.7% of political, 7.4% of social
  • Influence type: 0% of disinformation, 36.4% of misinformation, 16.7% of social media, 9.8% of traditional media

Medium: 16% of all studies (13)
  • Study context: 14.3% of contrived, 16.4% of real-world
  • Outcome domain: 9.1% of basic science, 0% of consumer, 8.3% of health, 16.1% of political, 22.2% of social
  • Influence type: 0% of disinformation, 9.1% of misinformation, 11.1% of social media, 19.6% of traditional media

Medium to large: 11% of all studies (9)
  • Study context: 14.3% of contrived, 9.8% of real-world
  • Outcome domain: 27.3% of basic science, 0% of consumer, 25% of health, 3.2% of political, 7.4% of social
  • Influence type: 50% of disinformation, 9.1% of misinformation, 16.7% of social media, 7.8% of traditional media

Large: 5% of all studies (4)
  • Study context: 9.5% of contrived, 3.3% of real-world
  • Outcome domain: 9.1% of basic science, 0% of consumer, 8.3% of health, 3.2% of political, 3.7% of social
  • Influence type: 0% of disinformation, 18.2% of misinformation, 5.6% of social media, 2% of traditional media

Note. These effects were found in the intended or positive direction.

Exemplar Studies

I highlight four studies that illustrate the complexity of influence operations and show how features of influence operations can co-occur.

  1. Blouin and Mukand’s (2019), “Erasing Ethnicity? Propaganda, Nation Building, and Identity in Rwanda,” took place in a real-world context and was a natural experiment featuring propaganda passed through traditional mass media with a social domain outcome. The research examined how radio propaganda targeted interethnic attitudes in post-genocide Rwanda to improve interethnic trust, increase willingness to interact across ethnic group lines, and decrease the salience of ethnicity. This research is one of only four articles featuring an influence operation interpreted as working to expand the moral circle and one of only six articles where the researchers studied an influence operation enacted to intentionally do good. This research also exemplifies the passive shaping of power.
  2. Yan et al.’s (2021), “Asymmetrical Perceptions of Partisan Political Bots,” took place in a contrived situation and was a natural experiment featuring social media influence operations with a basic science outcome. The research examined how Twitter users distinguished between partisan human users and partisan bots. This is an example of a mechanistic study because it focused on explaining how influence operations work. It is one of only six studies using or examining bots.
  3. González and Prem’s (2018), “Can Television Bring Down a Dictator? Evidence from Chile’s “No” Campaign,” took place in a real-world context and was a natural experiment featuring traditional mass media influence operations from a historical data set with a political domain outcome. The research examined the vote share won by Pinochet’s opposition in the 1988 Chilean plebiscite as a function of exposure to the opposition’s TV advertising campaign. The featured outcome was institutional and the study provided an example of an influence operation that passively shapes power.
  4. Banerjee et al.’s (2020), “Messages on COVID-19 Prevention in India Increased Symptoms Reporting and Adherence to Preventive Behaviors Among 25 Million Recipients with Similar Effects on Non-Recipient Members of their Communities,” took place in a contrived situation and featured a traditional mass media campaign with a health domain outcome. This research featured big data from a study where the researchers used individual messaging to actively change individuals’ behavior. This research was one of only six articles with an influence operation enacted intentionally to do good.

Limitations

  1. As with all empirical research, there is likely a publication bias or file drawer problem whereby influence operations appear more effective than they would if all studies, including those with null or negative (i.e., backfiring) effects, were published. There is no commonly accepted way to judge the size of the existing file drawer, though there are recommendations for combatting publication bias.
  2. Many of the studies included by Courchesne et al. (2021) were observational and correlational but still attempted to infer causality, primarily by statistically controlling for confounding variables to increase confidence in the effect estimates.
  3. Only 24% of studies examined mechanisms for how influence operations work. This leaves a large gap in the research on why people can be influenced, how the effects of influence operations persist over time, and in what contexts or situations various influence operations are more or less effective, to name a few of many potential mechanisms. A preference for natural experiments or real-world studies may lead to fewer resources spent on mechanistic studies that occur in lab-based or contrived situations. There are some examples of natural experiments in real-world contexts that are also mechanistic studies. Barfar’s (2019), “Cognitive and Affective Responses to Political Disinformation in Facebook,” found that exposure to fake news produced more anger and incivility than exposure to true news, which produced more analytical thinking, positivity, and anxiety.
  4. The conceptual and methodological distinctions between misinformation and disinformation are unclear. This is a general problem: there are not yet clear, standard, and commonly referenced characteristics distinguishing misinformation from disinformation, despite some attempts to distinguish the two based on intentionality. The research on influence operations uses the terms seemingly interchangeably, and this limits the conclusions and strategies that decision makers can draw. If misinformation involves more subtle tactics than disinformation, different anti-influence strategies might be developed. If misinformation is more common than disinformation, decision makers might want to focus on combatting misinformation. If disinformation has larger, stronger, or more severe direct effects, decision makers might want to focus resources on combatting disinformation.
  5. Influence operations may work differently for health outcomes than for other outcomes (e.g., political, social). Why, when, and how these differences occur is understudied. Courchesne et al. cited Wakefield et al.’s (2010) meta-analysis showing that influence operations conducted in the health domain (e.g., diet change, exercise change, vaccination behavior) are more effective for episodic or one-time health behaviors than for habitual behaviors. For example, influence operations were more successful at increasing yearly cancer screening or vaccination than at changing habitual behaviors like routinely consuming different food, flossing daily, or increasing daily physical activity. Furthermore, Wakefield et al. found that repeating the same influence operation had only a small effect on habitual behavior change, consistent with some of the implications from Harris’ (2020) review of the research on health behavior change. This reinforces the point that using the same influence repeatedly may reap few rewards for habits like flossing or switching to a plant-based diet.
  6. Although Courchesne et al. intended to review interdisciplinary influence operations research, they implicitly took a predominantly international relations perspective. The implications are that influence operations are successful and dangerous, particularly in political and international arenas, and that decision makers need better anti-influence strategies informed by research.

Of the 82 articles, 74% were conducted in the real world rather than in a contrived situation, and 48% focused on influence operations that sought to change the balance of political power. Only 7% focused on influence operations intended to do good, meaning that 93% focused on control-oriented, harmful, or destructive outcomes. This could suggest that there are more real-world influence operations with destructive or destabilizing outcomes than real-world influence operations with prosocial or morally expansive outcomes.

These numbers could alternatively point to a negativity bias within the extant research on influence operations, where influence operations are framed as threatening, negative, or unwanted influences that need to be combatted. One implication is that all social influence may come to be perceived as harmful, undermining the idea that social influence is an inherent, long-existing element of human societies. Ignoring neutral, positive, or prosocial influence operations perpetuates a cycle of knowledge whereby social influence and influence operations are cast in a negative frame. This could limit the scope and productivity of research on social influence and influence operations.

More studies on influence operations with prosocial or morally expansive goals exist. Courchesne et al. acknowledged a robust literature on “‘pro-social’ influence campaigns” in pro-environmental, health, and traditional political advertising. Additionally, a large body of social psychological research exists on strategies to change prejudiced attitudes and discriminatory behaviors. These strategies are not traditionally labeled as “influence operations” but they are similar to those included in many of the articles reviewed by Courchesne et al. I summarize some of this research in the next section.

Additional Empirical Research

Reducing prejudice and discrimination is one domain with substantial research on social influence and influence operations. Paluck et al. (2021) reviewed empirical studies of a wide range of strategies to reduce prejudice and discrimination or increase prosocial attitudes and behaviors. One of the studies highlighted in this meta-analysis found that a Twitter bot that sanctioned racist Twitter users led to fewer subsequent racist slurs over the two-month study period when the bot posed as an ingroup White man with many followers, compared to an ingroup White man with few followers or an outgroup Black man (regardless of number of followers). This study provided evidence for two mechanisms of social media influence to intentionally do good: ingroup membership and popularity. Another highlighted article found, across three experiments, that conversations pairing arguments with a non-judgmental exchange of narratives, compared to arguments alone, reduced exclusionary attitudes towards unauthorized immigrants and transgender people for up to four months.

Pro-environmental behavior is another domain in which there is substantial research. Clayton et al. (2015) reviewed social scientific research on climate change and emphasized change strategies as one of three key research areas. For example, a field experiment to influence energy reduction in buildings on a university campus found that buildings where occupants received informative emails about their building’s energy use showed 7% reduced energy use compared to buildings in a control condition where occupants received no feedback. Another condition where occupants of other buildings received training in energy reduction strategies showed a 4% reduction in energy use compared to the control condition buildings.

Health behavior is a third domain with substantial research on the effects of persuasive strategies. Of the 12 health domain articles included by Courchesne et al., four investigated vaccine uptake and five investigated COVID-19 behaviors. There is much research on influence operations to change other health behaviors like smoking, adopting a veg*n diet, and flossing. For example,

  1. GiveWell reviews organizations that use mass media to influence health behaviors in developing countries. The Population Media Center (PMC) and Development Media International (DMI) are two organizations focused on improving health and wellbeing outcomes using influence operations. PMC documents the effects of narrative storytelling (e.g., TV soap operas, radio programs, web content) to encourage contraceptive use, family planning, and the education of girls and women in countries throughout the developing world. DMI uses mass media storytelling (mostly radio programming) to effect health and wellbeing outcomes like family planning, hygiene, early childhood development, and infectious disease prevention in sub-Saharan Africa.

DMI published two articles with evidence on the effectiveness of a radio campaign in Burkina Faso. One of these articles reported a cluster randomized trial testing the influence of a 32-month radio campaign about family behaviors, compared to no radio campaign, on post-neonatal under-5 child mortality. The radio programming included short spots (i.e., 1-minute spots, 10 times a day) and longer interactive sessions (i.e., 2-hour sessions, 5 days per week) produced in the local language and covering several topics (e.g., promoting antenatal consultations with doctors, best breastfeeding practices, and health-care seeking for illnesses like diarrhea and pneumonia). The authors found no effect of radio campaign exposure compared to no exposure; both groups showed decreasing mortality rates over the multiyear study. The second article examined the impact of the same radio campaign on actual healthcare visits for reasons targeted by the campaign (e.g., antenatal care visits, under-5 care visits). The authors found that under-5 consultations for illness increased across the campaign years for those exposed to the campaign compared to those not exposed (malaria: 35-56%; lower respiratory infections: 11-39%; diarrhea: 60-107%).

  2. Cruwys et al. (2015) reviewed experimental studies of social influence in eating behavior and found that the social modeling of eating behavior was effective in 64 out of 69 reviewed studies. For example, one of the reviewed studies found that exposure to a message conveying a descriptive norm about healthy eating choices resulted in increased reports of healthy eating compared to a message conveying a descriptive norm about unhealthy eating or no message.
  3. Xiaoming et al. (2000) conducted an RCT in semirural China in which 18-30 year olds were randomly assigned to either a 12-month multifaceted AIDS education campaign (i.e., educational text, videos, radio programs, small group discussions, home visits, individual counseling, and a free supply of condoms) or a control condition with no systematic exposure to educational materials. Those who received the campaign materials reported more knowledge of AIDS, more condom use after the campaign than before, and greater use of condoms as their primary birth control method; those who were not exposed to the influence operation showed no changes.
  4. Mathur et al. (2021) reviewed 100 studies that used individual messaging to influence meat consumption attitudes and behaviors. They found that these influence operations have a meaningful effect on reducing meat consumption and intentions to purchase meat products. The chronologically first study included in this meta-analysis found that 9-10 year old girls who watched an episode of The Simpsons promoting vegetarianism were less positive towards eating meat, more knowledgeable about nutrition, and intended to eat less meat following the episode than girls of the same age who did not watch the episode.

Note. There is research on influence operations in public policy, consumer advertising, and biological conservation that I have not accounted for here. For example, Winter et al. (2015) found that negative argumentative and negative subjective comments on Facebook increased opposition to marijuana legalization compared to a control group that saw no comments. There was no effect of positive comments (whether argumentative or subjective) compared to the control group.

Future Directions

Of the research reviewed here, most studies found that influence operations are effective in producing some degree of intended change. This is consistent with the idea that social influence has been an important lever of change throughout human history.

However, this research has notable limitations. The reviewed articles paint a rosier picture of the effectiveness of influence operations, whether as destructive forces that need to be combatted or as prosocial forces that can be capitalized on, than may be the reality of large-scale organizational strategies. The reviewed studies cover individual and institutional levels of influence operation strategies and effects, mixing foundational knowledge about social influence at different levels. This comprehensive coverage could be beneficial in broadening our understanding of how social influence mechanisms explain influence operations. It could also be detrimental if there are different mechanisms underpinning successful individual-level and institutional-level change.

To address these and other limitations, researchers could implement the following strategies (a brief sketch of the first and third appears after this list):

  1. Assess publication bias in the existing literature
  2. Craft and implement interdisciplinary standards to reduce publication bias
  3. Conduct meta-analyses on the existing literature
  4. Synthesize from the existing empirical results to identify promising pathways for future research
  5. Conduct future object-level research
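As one illustration of the first and third strategies, the sketch below computes a DerSimonian-Laird random-effects summary and Egger's regression test for small-study effects, a common publication bias diagnostic. The effect estimates and standard errors are made-up placeholder numbers; this is a minimal example of the kind of analysis that could be run once comparable effect sizes are extracted from the literature, not an analysis of the 82 articles.

```python
import numpy as np

# Hypothetical effect estimates (e.g., standardized mean differences) and their
# standard errors; these numbers are placeholders for illustration only.
effects = np.array([0.12, 0.25, 0.08, 0.40, 0.15, 0.30, 0.05, 0.22])
std_errors = np.array([0.05, 0.10, 0.04, 0.18, 0.07, 0.12, 0.03, 0.09])

# DerSimonian-Laird random-effects summary.
variances = std_errors ** 2
w_fixed = 1.0 / variances
fixed_mean = np.sum(w_fixed * effects) / np.sum(w_fixed)
Q = np.sum(w_fixed * (effects - fixed_mean) ** 2)
df = len(effects) - 1
C = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
tau_squared = max(0.0, (Q - df) / C)              # estimated between-study variance
w_random = 1.0 / (variances + tau_squared)
random_mean = np.sum(w_random * effects) / np.sum(w_random)
random_se = np.sqrt(1.0 / np.sum(w_random))
print(f"Random-effects mean = {random_mean:.3f} (SE = {random_se:.3f}, tau^2 = {tau_squared:.4f})")

# Egger's regression test: regress the standardized effect (effect / SE) on
# precision (1 / SE); an intercept far from zero suggests funnel-plot asymmetry.
z = effects / std_errors
precision = 1.0 / std_errors
X = np.column_stack([np.ones_like(precision), precision])
coefs, _, _, _ = np.linalg.lstsq(X, z, rcond=None)
intercept, slope = coefs
resid = z - X @ coefs
sigma2 = np.sum(resid ** 2) / (len(z) - 2)
cov = sigma2 * np.linalg.inv(X.T @ X)
t_intercept = intercept / np.sqrt(cov[0, 0])
print(f"Egger intercept = {intercept:.3f} (t = {t_intercept:.2f})")
```

A full analysis would also report heterogeneity statistics and sensitivity analyses, but even this minimal version makes the publication bias question answerable with numbers rather than impressions.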

Future focus on the following topics would help to build the science of mass media, propaganda, and social influence:

  1. Systematic evaluation of epistemological assumptions
  2. Studies on complexity and effect direction
  3. Studies on impact

Appendix

Table A1: Codebook for the Expanded Analysis

Note. The codebook includes a summary of the judged effect sizes, notes summarizing the 82 articles, and a full breakdown of which articles were coded under each option of the coding scheme.


[1] Bateman et al. (2021) previously summarized Courchesne et al.’s (2021) report on the effects of influence operations, written for Princeton University and the Carnegie Endowment for International Peace.

