Bots Manipulate Public Opinion in Russia-Ukraine Conflict

New research from the University of Adelaide has shown that social media bot accounts have been used in malicious campaigns to influence online discussion during the Russia-Ukraine conflict.

A “bot account” is a social media account that has been generated automatically and interacts with content in a pre-programmed way. Bot accounts are created for many different purposes, including malicious activities such as spamming links, spreading viruses or malware, and posting misinformation and disinformation.

In recent years, bots have risen to the forefront of cybersecurity research because of their increasing usage in political messaging during times of social unrest.

“In the past, wars have been primarily fought physically, with army, air force and navy operations being the primary forms of combat,” says Joshua Watt, an MPhil candidate in Applied Mathematics and Statistics from the University of Adelaide’s School of Mathematical Sciences.

“However, social media has created a new environment where public opinion can be manipulated on a very large scale, especially using automatically generated bot accounts. As a result, these environments can be used to manipulate discussion, as well as cause disruption and overall public distrust.”

Now, Joshua and co-lead researcher Bridget Smart, who is also an MPhil candidate from the School of Mathematical Sciences, have conducted new research into how bot accounts have been deployed in the war between Russia and Ukraine.

They analysed over five million tweets posted to Twitter between late February and early March 2022 that contained the following hashtags, where “(I)” indicates that variants with and without a leading “I” were included: #(I)StandWithPutin, #(I)StandWithRussia, #(I)SupportRussia, #(I)StandWithUkraine, #(I)StandWithZelenskyy and #(I)SupportUkraine.
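The article does not describe the researchers’ actual pipeline, but the kind of hashtag filter involved can be illustrated with a minimal Python sketch. Everything below, including the function name and the sample tweet texts, is hypothetical and for illustration only:

```python
import re

# The six hashtag families studied, where the leading "I" is optional
# (e.g. #StandWithPutin and #IStandWithPutin both match).
HASHTAG_PATTERN = re.compile(
    r"#I?(StandWithPutin|StandWithRussia|SupportRussia|"
    r"StandWithUkraine|StandWithZelenskyy|SupportUkraine)\b",
    re.IGNORECASE,
)

def mentions_studied_hashtags(tweet_text: str) -> bool:
    """Return True if the tweet text contains any of the studied hashtags."""
    return bool(HASHTAG_PATTERN.search(tweet_text))

# Example with made-up tweet texts:
tweets = [
    "#IStandWithUkraine today and always",
    "Lovely weather in Adelaide",
    "#standwithrussia",
]
print([t for t in tweets if mentions_studied_hashtags(t)])
# -> ['#IStandWithUkraine today and always', '#standwithrussia']
```

In practice the tweets would be collected via the Twitter API, and classifying accounts as bots would be a separate step; this sketch only shows the hashtag-matching stage.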

“We found that between 60 and 80 per cent of tweets using the hashtags we studied came from bot accounts during the first two weeks of the war,” says Joshua.

They also noticed spikes in bot activity that coincided with significant developments in the conflict. For example, there was a spike in bot activity immediately after Russia captured the first major Ukrainian city, likely intended to bolster support for the move.

What’s next?

The analysis methods developed by the researchers could be applied to any polarising online discussion. Social media platforms could use these insights to detect bot accounts and mitigate their impact, helping to ensure that online discussion is not influenced by bad actors.

With social media and online discussion playing an ever more important role in global conflict, this work also highlights that governments may need to develop stricter policies on how social media organisations operate in this context.
