Jan 22, 2018 | 21:15 GMT

Automating Influence: The Manipulation of Public Debate

Andrew Trabulsi, Global Fellow

When the FCC opened the public comment period for its proposed repeal of net neutrality last April, few could doubt that the intensely debated policy change would be controversial. Even fewer, perhaps, could have anticipated why: the controversy would reach beyond the policy’s potential impact on the Internet.

Just a day before the FCC’s vote, an investigation led by New York Attorney General Eric Schneiderman revealed that more than two million of the comments the FCC received may have used stolen identities. In response to the FCC’s ensuing 3-2 decision to repeal net neutrality protections, Schneiderman vowed to lead a multistate lawsuit to stop the rollback. “The FCC’s vote to rip apart net neutrality is a blow to New York consumers, and to everyone who cares about a free and open Internet. […] That’s why we will sue to stop the FCC’s illegal rollback of net neutrality,” Schneiderman said in a statement following the decision.

The fraudulent use of stolen identities, however, was only one part of the controversy.

Reporting led by The Wall Street Journal and supported by Quid, an artificial intelligence company based in San Francisco, uncovered something more unnerving: within thousands of comments, libraries of words and sentences had been adroitly and programmatically generated to pass as original public opinion urging policymakers to act. According to the Journal’s report, corroborated by data and research from data scientist Jeff Kao, approximately 1.3 million of the more than 22 million comments the FCC received were created by automated accounts.
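
To see how a single script can churn out comments that read as distinct while carrying an identical message, consider a minimal sketch in Python. The template wording and synonym slots below are invented for illustration; the actual generators behind the FCC comments were never published.

```python
import itertools
import random

# Hypothetical "mad libs" template: each slot holds interchangeable
# phrasings, so every expansion reads as a unique comment while
# carrying an identical message. The wording here is invented.
TEMPLATE = [
    ["I strongly urge", "I respectfully ask", "I demand"],
    ["the FCC", "the Commission"],
    ["to reverse", "to undo", "to repeal"],
    ["the current", "the existing"],
    ["rules regulating the Internet.", "restrictions on the Internet."],
]

def generate_comment(rng: random.Random) -> str:
    """Expand the template by picking one phrasing per slot."""
    return " ".join(rng.choice(slot) for slot in TEMPLATE)

rng = random.Random(42)
for _ in range(3):
    print(generate_comment(rng))

# Even this tiny template yields 3 * 2 * 3 * 2 * 2 = 72 surface forms,
# all semantically identical -- exactly the signature analysts hunt for.
print(sum(1 for _ in itertools.product(*TEMPLATE)), "possible variants")
```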

“We found crazy anomalies in the data that told us something wasn’t right,” said Carlos Folgar, an analyst at Quid who led research on the project. Using natural language processing, a form of artificial intelligence, Quid looked for semantic similarities within the data to identify trends and duplicated syntax. After identifying and removing nearly 740 comment templates, mostly shared by organizations to simplify the commenting process for citizens, Quid analyzed the remaining comments by sentence length. “You would expect a normal distribution of comments within the data, because some short sentences, like ‘I urge you to act’, would be used frequently. Instead, we found that lengthy sentences, sometimes four or five lines long, would be duplicated across thousands of comments.”
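
Quid has not published its pipeline, but the sentence-length test Folgar describes can be sketched in a few lines of Python: split each comment into sentences, count verbatim repeats, and bucket them by length. Heavy duplication among very long sentences is the red flag. The sample comments are invented.

```python
import re
from collections import Counter

def duplicated_sentences_by_length(comments: list[str]) -> dict[int, int]:
    """Count verbatim sentence repeats, bucketed by sentence length in
    words. Frequent short stock phrases ("I urge you to act.") are
    expected; heavy duplication of very long sentences is the anomaly."""
    counts = Counter()
    for comment in comments:
        # Naive splitter; a production pipeline would use an NLP library.
        for sentence in re.split(r"(?<=[.!?])\s+", comment.strip()):
            if sentence:
                counts[sentence] += 1
    dupes_by_len = Counter()
    for sentence, n in counts.items():
        if n > 1:  # keep only sentences that recur
            dupes_by_len[len(sentence.split())] += n
    return dict(sorted(dupes_by_len.items()))

comments = [
    "I urge you to act. Net neutrality protects consumers.",
    "I urge you to act. The market should decide.",
    "Net neutrality protects consumers.",
]
print(duplicated_sentences_by_length(comments))  # {4: 2, 5: 2}
```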

Extremely complex comments that showed high semantic similarity across the dataset, differing only in slight changes to wording or punctuation, indicated that the FCC’s public comment system had been gamed. Coupled with the use of stolen identities, the attack has raised questions about the fairness of the FCC’s ruling.
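
One common way to quantify “high semantic similarity with slight changes to wording or punctuation” is to compare comments as sets of overlapping word n-grams (shingles) and score the overlap with Jaccard similarity. This is a generic stand-in, not Quid’s actual method, and the sample sentences are invented.

```python
import re

def shingles(text: str, k: int = 3) -> set:
    """Overlapping k-word shingles, with case and punctuation stripped
    so trivial edits don't hide the match."""
    words = re.findall(r"[a-z']+", text.lower())
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity: |A intersect B| / |A union B|."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Invented example: one synonym swapped, punctuation tweaked.
original = ("The unprecedented regulatory power the previous administration "
            "imposed on the Internet is smothering innovation.")
variant = ("The unprecedented regulatory power the previous administration "
           "imposed on the internet is stifling innovation!")

score = jaccard(shingles(original), shingles(variant))
print(f"similarity: {score:.2f}")  # ~0.71, far above unrelated comments
```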

While influence campaigns aimed at altering public discourse and politics are rapidly gaining prominence, the field, as discussed in previous writings, has a detailed past. Tasked with winning the moral high ground during World War I, the famed British propaganda unit Wellington House distributed evocative, explicit reports of German barbarism to bolster public opinion in favor of the Allies. At a time long before the ubiquity of Internet communications, the tactic proved invaluable in securing attention and support for the Allies’ war effort, improving recruitment and persuading neutral countries.

Today, such tactics show up, albeit in different and less traditional forms, across case studies serving both political and commercial purposes. In a vein similar to the manipulation of the net neutrality comments, when the Consumer Financial Protection Bureau proposed adjusting its rules to curb abuse in payday lending markets, it too was inundated with an abnormal number of duplicate and semantically similar comments.

Viewed alongside broader state-backed operations aimed at swaying elections, the trend poses serious challenges to democracy. “Very important issues are asking the public for their perspective and our policymakers aren’t getting the full picture,” said Folgar. “This is data that is supposed to be representing the public voice, but instead it’s being manipulated.”

As political and economic issues become increasingly entangled with technological engagement, it is ever more important to understand the limitations and vulnerabilities of such systems. While there may be no immediate or long-term solutions to the challenges these issues pose, I look forward to discussing and reviewing how these tactics continue to be used, and what they may mean for the broader geopolitical arena.
