
Weaponized bots are coming for your mind, Canada



A few days ago, I stumbled across a network of automated Twitter accounts posting about U.S. President Barack Obama removing Cuba from the terror watch list — something that happened four years ago. The sudden burst of activity surrounding an event from 2015 caught my eye, prompting me to take a closer look.

The accounts were all tweeting the exact same message. Most looked like they were created using the same template. It's still unclear why they were tweeting about an event from 2015 — it could have been a programming error or an attempt to jump on a topic that has re-emerged in the news lately. It's also possible that this was some sort of test run for future activity.

This bot network, or botnet, appears to fall into the category of confusing but benign noise added to the tempest of “fake news.” But bots are capable of much, much worse.

I’m a social scientist who’s been watching bot activity in the U.S. for nearly five years. I’ve seen bots rapidly evolve to become more sophisticated and human-like, and I've studied how they've shaped public perceptions and influenced voter behaviour. Now, I'm watching it happen in Canada.

When I say bots, I’m referring to automated social media profiles that are designed to look like human users. Specifically, I’m talking about Twitter bots, or accounts that have been programmed to autonomously complete functions such as tweeting, retweeting, liking, direct messaging or following/unfollowing other accounts. Twitter bots can be fully automated, meaning that all of their actions are controlled by a computer program, or they can be partially automated — a type of account known as a "cyborg," which is run by a human but also uses automation to post faster, more frequently and more voluminously.
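To make that concrete, here is a minimal sketch of what full automation can look like. It assumes the Tweepy library (v4) and valid Twitter developer credentials; the credentials and messages below are placeholders, not anything tied to a real operation.

```python
# A minimal sketch of a fully automated account, assuming Tweepy v4 and a
# valid Twitter developer account. All credentials and messages are placeholders.
import time

import tweepy

client = tweepy.Client(
    consumer_key="KEY", consumer_secret="SECRET",
    access_token="TOKEN", access_token_secret="TOKEN_SECRET",
)

TALKING_POINTS = ["Example message #1", "Example message #2"]

while True:
    for text in TALKING_POINTS:
        client.create_tweet(text=text)  # post autonomously, no human in the loop
        time.sleep(3600)                # wait an hour before the next post
```

A "cyborg" account simply mixes automated calls like these with a human operator's manual activity.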


Bots exist within a larger ecosystem of disinformation and can be used for a variety of purposes, including distorting perceptions of public opinion by amplifying certain narratives, sharing links to “fake news” articles or silencing particular voices by drowning them out.

In the U.S., bots played a particularly significant role in shaping online discourse ahead of the 2016 presidential election. About a fifth of the election-related conversation on Twitter was driven by bots, according to a study by researchers at the University of Southern California, and pro-Trump bots outnumbered pro-Clinton bots by about 7:1 in the final weeks of the campaign.

Most Twitter users were unable to distinguish bots from authentic accounts.

A 2018 working paper released by the non-partisan National Bureau of Economic Research concluded that pro-Trump and anti-Clinton bots could have influenced enough voters in key districts to swing the election in favour of Trump.

Based on what I've seen, I'm concerned that people and lawmakers still see bots as a nuisance, rather than a threat to democracy. I'm worried that we aren't even close to being prepared to deal with the next generation of automated accounts. Most of all, with a federal election approaching this fall, I expect to see an uptick in bot activity in Canada. I'm not sure lawmakers and the public are equipped to handle it.

No mere nuisance

Many Twitter bots are harmless, and some are even helpful. For instance, the Twitter bot @gccaedits sends out a tweet every time a Wikipedia entry is edited anonymously from a Government of Canada IP address. I even have my very own copycat bot, which replies to my tweets with a verbatim copy of the tweet it's replying to.
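For readers curious how a transparency bot like @gccaedits might work under the hood, here is a hedged sketch (not the bot's actual code): it polls Wikipedia's public recent-changes API for anonymous edits and checks each editor's IP against a list of government address ranges. The range below is a documentation placeholder, not a real Government of Canada block.

```python
# Sketch of how a transparency bot like @gccaedits could work: poll Wikipedia's
# recent-changes API for anonymous edits and flag IPs inside government ranges.
# The IP range below is a placeholder, not an actual Government of Canada block.
import ipaddress

import requests

GOC_RANGES = [ipaddress.ip_network("192.0.2.0/24")]  # placeholder range

resp = requests.get(
    "https://en.wikipedia.org/w/api.php",
    params={
        "action": "query", "list": "recentchanges", "rcshow": "anon",
        "rcprop": "title|user", "format": "json",
    },
)
for change in resp.json()["query"]["recentchanges"]:
    ip = ipaddress.ip_address(change["user"])  # anonymous edits are logged by IP
    if any(ip in net for net in GOC_RANGES):
        print(f"Anonymous government edit to '{change['title']}' from {ip}")
        # A real bot would now compose and post a tweet about this edit.
```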

Even the more nefarious-seeming bots are easy to dismiss as mere nuisances, scams that no one is really falling for. But that's a mistake.

Yes, poorly designed bots are relatively easy to spot and ignore, but the most effective bots operate in the shadows, leaving very few footprints. These are the ones best able to exploit human vulnerabilities and biases, and thereby manipulate public perceptions on a mass scale — with chilling implications for society.

Bots take advantage of a cognitive bias known as the bandwagon effect, which describes our tendency to support something because it appears that a lot of other people support it, too. They give people, such as undecided voters, the impression that certain messages, ideas or political candidates have more support than they actually do.

One of the results of this is the skewing of search engines such as Google, which are programmed to pick up on trending topics. If bots amplify a topic enough on social media, Google's search algorithm will start including it in its autocomplete function. Eventually, this produces a feedback loop: As people click on the search terms and results that come up first, Google pushes the search result higher, driving more traffic.
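A toy simulation illustrates the loop. This is an assumption-laden illustration, not Google's actual ranking algorithm: once bots seed a topic with artificial attention, visibility drives clicks and clicks drive visibility, so the gap keeps widening on its own.

```python
# Toy model of the visibility feedback loop described above: not a real
# ranking algorithm, just an illustration of how an artificial head start
# compounds over time.
scores = {"organic topic": 100.0, "bot-amplified topic": 100.0}
scores["bot-amplified topic"] += 50.0  # initial burst of bot amplification

for day in range(1, 6):
    for topic in scores:
        clicks = scores[topic] * 0.1  # clicks scale with current visibility
        scores[topic] += clicks       # visibility rises with clicks
    print(day, {t: round(s) for t, s in scores.items()})
# The seeded topic's lead grows every iteration, with no further bot input.
```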

One of the most interesting patterns I've observed is the high volume of bot activity in the early morning hours, 3 a.m. to 5 a.m. I first noticed this several years ago and eventually realized it was a strategy to amplify content when traffic is low and fewer people are online.

After pushing certain hashtags and keywords into Twitter's trending list, the bots dropped off just in time for people to wake up, see the content and start circulating it themselves. Most of these users likely had no idea they were amplifying bot-pushed hashtags and keywords. That's the mark of a successful bot campaign.
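One way to look for this pattern in your own data is to bin a hashtag's tweets by hour and flag cases where overnight volume dwarfs daytime volume. The sketch below is a simple heuristic of my own devising, not a definitive bot detector; the three-to-one threshold is an arbitrary starting point.

```python
# A simple heuristic for the overnight-amplification pattern described above:
# compare average tweet volume at 3-5 a.m. against daytime volume. The factor
# of 3.0 is an illustrative threshold, not a validated cutoff.
from collections import Counter
from datetime import datetime

def overnight_spike(timestamps: list[datetime], factor: float = 3.0) -> bool:
    by_hour = Counter(ts.hour for ts in timestamps)  # tweets per hour of day
    overnight = sum(by_hour[h] for h in (3, 4, 5)) / 3     # avg, 3-5 a.m.
    daytime = sum(by_hour[h] for h in range(9, 21)) / 12   # avg, 9 a.m.-9 p.m.
    return daytime > 0 and overnight > factor * daytime
```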

The appeal to political actors is obvious. Parties and candidates could hire companies to program bots to retweet, like and share their own content, or to flood the replies of an opponent's content with negative messages. An army of bots could also be used to co-opt trending hashtags associated with an opponent's campaign, either to jam the signal and disrupt efforts to organize, or to smear an opponent's supporters by injecting extreme, offensive or otherwise negative messaging into the mix. Third parties and foreign entities could do the same — with or without the consent of the political party or candidate they're promoting.

What we know about bots in Canada

The bottom line is that, if you use Twitter, it's safe to assume that you've been targeted by bot activity in one form or another. And if you actively discuss Canadian politics on Twitter, you should be prepared to be targeted even more intensely in the coming months.

To date, there has been relatively little discussion about bot activity in Canada, and only a limited number of studies have looked into the phenomenon. The social media analytics firm MentionMapp Analytics, one of the few research firms doing such work, has estimated that nearly a third of accounts tweeting about Canadian politics exhibit signs of inauthentic activity.

One of MentionMapp’s recent studies found that a substantial proportion of accounts tweeting about Alberta politics may be bots. According to the study, which looked at nearly 3,000 Twitter accounts using the hashtags #abpoli and #ableg between Jan. 23 and Feb. 28, 2019, an estimated 29 per cent of the accounts exhibited signs of automation.

Another MentionMapp analysis found signs of suspected bots amplifying tweets from @fordnation, the Twitter account of Ontario PC party leader Doug Ford, ahead of the June 2018 Ontario general election. About one in five mentions or retweets of @fordnation came from accounts that tweeted more than 70 times a day on average — a possible indication of automation software, according to MentionMapp.
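That kind of check is easy to reproduce. Every public Twitter profile exposes a lifetime tweet count and a creation date, so a rough tweets-per-day average falls out directly. The sketch below applies MentionMapp's reported 70-per-day threshold; the field names mirror Twitter's public user object, and created_at is assumed to be timezone-aware.

```python
# Reproducing the tweets-per-day heuristic: statuses_count and created_at
# mirror fields on Twitter's public user object; created_at is assumed to
# be timezone-aware (UTC).
from datetime import datetime, timezone

def tweets_per_day(statuses_count: int, created_at: datetime) -> float:
    age_days = max((datetime.now(timezone.utc) - created_at).days, 1)
    return statuses_count / age_days

def possibly_automated(statuses_count: int, created_at: datetime) -> bool:
    return tweets_per_day(statuses_count, created_at) > 70  # MentionMapp's cutoff
```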

During the 2017 provincial election in British Columbia, MentionMapp found evidence of bot activity targeting B.C. Liberal Party incumbent Christy Clark. The activity centred on one Twitter account, @ReverendSM, which frequently posted tweets that used the hashtag #bcpoli and accused Clark of corruption. MentionMapp's analysis identified at least 280 bots actively retweeting the initial account.

There is also evidence of bot activity surrounding the Kinder Morgan pipeline debate, including one instance of suspected bots being used to sway the results of an online public opinion poll.

This activity dates back to at least 2012, according to researchers Fenwick McKelvey of Concordia University and Elizabeth Dubois of the University of Ottawa, who observed the phenomenon during Quebec's provincial election that year.

One of the most active Twitter accounts during the 2012 Quebec election tweeted 11,000 times in support of the nascent Coalition Avenir Québec (CAQ). But the account, named CAQBot, wasn’t human. It was a bot, an automated software program designed to mimic human interactions on social media.

I am currently studying Twitter activity surrounding major media outlets and politicians in Canada, looking for signs of inauthentic activity such as artificial amplification and fake followers. I hope to add to the small body of research and to the work being done by media outlets across the country, because systematic analyses of bot activity in Canada remain scarce. One of the challenges is that bots are often trivialized in public discourse — something that McKelvey and Dubois warned about:

"When Canadians discuss bots, they are largely treated as a novelty: a journalistic experiment, a one-off hack or a blip on the electoral radar. But Canadians risk trivializing an important debate about the future of its democracy."

Canada's Elections Modernization Act (Bill C-76) lays out new rules for campaign advertising, with updated regulations to reflect the modern era of political advertising, much of which now takes place on platforms like Google, Facebook and Twitter.

Among other things, the law requires social media companies to maintain a database of all election advertisements placed on their platforms during the pre-election and election periods. The database must include the content of each political ad, as well as the name of the person or entity that authorized it. The law also prohibits foreign entities from spending money to influence Canada's election. The purpose of these new rules is to increase transparency, so that voters know where information is coming from and who is trying to influence them.

While Canada's new election act accounts for paid advertising and promotion using the built-in features on social media platforms like Facebook and Twitter, there are many other ways to amplify and boost the visibility of content online. Right now, that's a huge grey area just waiting to be exploited.

The U.S. has seen the disruptive potential of bots, and we are still not prepared to deal with the problem. I fear Canada may be in a similar position — but I hope Canada will learn from America's mistakes.

The battle for your mind

Right now, most bots are only programmed to perform a limited number of tasks. However, the technology exists to create a new generation of sophisticated self-learning bots, powered by artificial intelligence and evolutionary algorithms that would allow them to refine their messaging and communication style based on the effectiveness of previous messages.

Once unleashed into cyberspace, these bots will essentially engage in ongoing A/B testing by altering characteristics like the content of messages, the language and tone used in messaging, the delivery mode and timing, and the features of the account itself. This process will be carried out over and over again, with different audiences and in different contexts. Each one of these test runs will produce countless data points to be incorporated into the bot's algorithm and used to improve its performance.
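The core of that loop is an old and simple idea. The hedged sketch below shows one way such message testing could work, using a basic epsilon-greedy bandit over hypothetical message framings; it is an illustration of the technique, not any known bot's implementation, and the engagement numbers would come from platform metrics.

```python
# A hedged sketch of the "ongoing A/B testing" described above, using a basic
# epsilon-greedy bandit: keep whichever message framings earn the most
# engagement. The framing names are hypothetical.
import random

variants: dict[str, list[float]] = {
    "angry framing": [],
    "fearful framing": [],
    "hopeful framing": [],
}

def pick_variant(epsilon: float = 0.1) -> str:
    # Mostly exploit the best-performing framing; occasionally explore.
    if random.random() < epsilon or not any(variants.values()):
        return random.choice(list(variants))
    return max(
        variants,
        key=lambda v: sum(variants[v]) / max(len(variants[v]), 1),
    )

def record_engagement(variant: str, engagement: float) -> None:
    # Feed observed engagement (likes, retweets, replies) back into the loop.
    variants[variant].append(engagement)
```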

Eventually, bots will be able to autonomously identify who is most vulnerable to specific types of propaganda and disinformation, as well as the optimal way to disseminate it.

The public got a glimpse of this dystopian reality in 2016, when the now-defunct data firm Cambridge Analytica harnessed data stolen from Facebook and used it to develop a weaponized propaganda machine that profiled voters and targeted them with military-style psychological warfare.

In 2016, voters in both the U.S. and the U.K. were caught entirely off guard by the sheer volume of digital manipulation they faced. With a federal election coming up in October, there’s good reason to believe that Canada may be next in line. However, by learning from recent events and preparing for what is to come, Canadians may just be able to avoid falling prey to the same tactics and techniques that have wreaked havoc on other democracies around the world.

A big part of that task involves understanding the threats we face in cyberspace. Yes, “Russian bots” are a threat — but not all malicious or strange activity on social media comes from Russian bots, and jumping to blame them for everything is not doing anyone any favours. Attributing every influence operation to automated activity from Russia means overlooking the many other types of digital manipulation that are distorting our information space and shaping our perceptions of reality. It also minimizes the very important role of human actors in the broader ecosystem of disinformation. And perhaps most perilously, if we are always looking at foreign actors as the culprit for social media manipulation, we risk overlooking the fact that in many instances, the threat is coming from inside the house.
