Earlier this month, I submitted my Master of Arts dissertation proposal in the Department of War Studies at King’s College London with a working title of “Defending the feed: How states can deter, detect, and counter disinformation campaigns from intelligence agencies and non-state actors seeking to amplify discord and influence elections.”
Here’s the introduction to my proposal as submitted:
Governments have long sought, covertly or overtly, to influence elections and components of democratic society in other countries to achieve specific political objectives. Those strategic political objectives may include regime change, a shift in international relationships, or simply the destabilization or distraction of a perceived near-peer competitor. Historically, these efforts have involved financing an opposition candidate, broadcasting or publishing propaganda, instigating unrest, or other means of covert action.
The rise of the internet and major social media platforms such as Twitter, Facebook, and Instagram, paired with instant communication tools such as Facebook Messenger, WhatsApp, and WeChat, has created new opportunities for intelligence agencies and non-state actors to engage in disinformation campaigns on a vast scale. Enabled by these new technologies, agencies and actors can create viral stories, videos, and memes that rapidly spread disinformation across social media platforms. The results of these disinformation campaigns could range from a user sharing a meme filled with false information with their friends, to hundreds of individuals showing up at a protest and creating physical conflict between opposing political groups over a hot-button social or political issue, to directly influencing the outcome of a presidential election in the United States.
Intelligence agencies and aligned non-state groups have been conducting such disinformation campaigns through online means for some time. As highlighted in Special Counsel Robert S. Mueller’s report on Russian interference in the 2016 United States elections, the Internet Research Agency (IRA), an aligned non-state group in Russia, engaged in a sophisticated social media disinformation campaign to intensify political discord and influence the outcome of the presidential election in favor of Russia’s preferred candidate. Russia’s Main Intelligence Directorate of the General Staff of the Armed Forces, the GRU, supported these efforts through hack-and-dump operations aimed at the Clinton campaign, the Democratic National Committee (DNC), and other Democratic Party–aligned organizations.
This dissertation will assert that intelligence agencies and non-state actors will continue to engage in disinformation campaigns because they are relatively low-risk, inexpensive operations with the potential for outcomes that favor the aggressor. It will further assert that traditional methods of deterring these actions are largely ineffective and that new approaches will be required. It will also assert that intelligence and counterintelligence agencies are largely ill-prepared to detect such efforts by other countries.
This dissertation will argue that new strategies for deterring and detecting these types of information warfare campaigns are required. Stronger intelligence sharing and coordination between the public and private sectors will be necessary. Gaps in the skill sets of current and future generations of intelligence analysts must also be acknowledged and addressed.