CMU researchers find many of the most influential Twitter accounts tweeting about the coronavirus are bots
In a study that has drawn national media attention, researchers at Carnegie Mellon University found that much of the discussion on Twitter surrounding covid-19 is misinformation, fueled by bot accounts.
CMU researchers have collected more than 200 million tweets discussing coronavirus or covid-19 since January, according to a news release. Of the 50 most influential retweeters, 82% were bots; of the top 1,000 retweeters, 62% were.
“In this pandemic, the level of disinformation we’re seeing is an order of magnitude higher than we’ve ever seen in the past,” said Kathleen Carley, a professor in the School of Computer Science’s Institute for Software Research and leader of the study.
Twitter bots are automated programs that run Twitter accounts, performing actions such as tweeting, following and unfollowing users, and liking posts without human input. Anyone familiar with the Twitter application programming interface (API) can create one. Bots are most often used to share links and drive traffic to websites, according to the Pew Research Center.
Increased bot activity is common during natural disasters, crises and elections, Carley said. Disasters and calamities create fodder for people who want to create havoc, she said. Carley said people often create bots "just for fun," and with the pandemic, bot creators have a lot of time on their hands. It's too early to determine the exact motive behind all of these bot accounts.
“We do know that it looks like it’s a propaganda machine, and it definitely matches the Russian and Chinese playbooks, but it would take a tremendous amount of resources to substantiate that,” she told NPR.
The covid-19 pandemic has spurred more misinformation than any world event in recent memory, and bots often are behind it, the CMU study found. Carley said during natural disasters, there are usually around 10 to 12 false narratives that gain traction on social media. During elections, there may be up to 50. But during covid-19, Carley’s study found at least 166 misinformation stories circulating on Twitter.
The surge could be attributed to the fact that this is a "once-every-100-years disaster," Carley said. The coronavirus took everyone by surprise, and the magnitude of its impact is fueling more false narratives. Another factor, she said, is the global nature of the pandemic. Misinformation is coming from all over the world and taking root in country after country.
“As human beings, people are more afraid, they’re more worried, they’re not getting a lot of information,” she said. “So they’re going to grasp at any information that comes. And if they’re too afraid, they’re going to be responding emotionally.”
The team used a variety of methods to identify whether an account was a bot, according to the release. Using artificial intelligence, they processed account information and looked for signals such as the number of followers, tweeting frequency and the account's mention network. Carley said they also used several measures to determine which retweeters were the most influential, those best positioned by their network connections to spread misinformation.
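To illustrate the kind of signals the researchers describe, here is a minimal sketch of a rule-based bot scorer. The feature names, thresholds and weights below are illustrative assumptions for this article, not the CMU team's actual model, which uses machine learning over many more signals.

```python
# Hypothetical bot-scoring sketch; thresholds and weights are invented
# for illustration and do not reflect the CMU study's real classifier.
from dataclasses import dataclass

@dataclass
class Account:
    followers: int
    tweets_per_day: float
    distinct_mentions: int   # size of the account's mention network
    account_age_days: int

def bot_score(acct: Account) -> float:
    """Combine simple account signals into a 0-1 score; higher suggests automation."""
    score = 0.0
    if acct.tweets_per_day > 50:      # humans rarely sustain this posting rate
        score += 0.4
    if acct.followers < 25:           # few followers despite heavy activity
        score += 0.2
    if acct.distinct_mentions > 200:  # unusually broad mention network
        score += 0.2
    if acct.account_age_days < 30:    # freshly created account
        score += 0.2
    return score

def is_likely_bot(acct: Account, threshold: float = 0.5) -> bool:
    return bot_score(acct) >= threshold
```

For example, a week-old account with 10 followers that posts 120 times a day to 300 distinct accounts would score well above the threshold, while an old, moderately active account with a large following would score zero.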
It gets complicated, Carley said, because not all bots spread misinformation — some are retweeting messages from the Centers for Disease Control and Prevention or the World Health Organization. That makes it even harder for casual social media users to know what is real and what is not, she said.
Carley said some of the most common conspiracy theories the team came across were the idea that covid-19 was created as a bioweapon, or that the disease was never real, just a “ruse” by the government to control the population. There are also several fake health remedies circulating, Carley said — everything from drinking bleach to green tea. All of the preceding claims are baseless.
For Carley and her team of researchers, immersing themselves in the sea of false narratives could become exhausting.
“Especially if they get really negative and hateful and very ugly,” she said. “In which case, I tell my students, ‘If it’s bothering you, just don’t do it.’ We just try not to stay too close to some of the data.”
There’s also evidence to suggest online conversations about “reopening America” have been orchestrated, with a large number of bots dominating the conversation, many of them accounts created only recently. A lot of the “reopening America” tweets include unfounded conspiracy theories, such as hospitals being filled with mannequins or the virus being caused by 5G cell towers, the latest generation of high-speed mobile network technology. Telecommunication companies such as Verizon and AT&T have been rapidly expanding 5G coverage across the U.S.
With the virus becoming more politicized each day, Carley said the coronavirus-related misinformation is creating division at multiple levels and different ends of the political spectrum. But these stories are especially worrisome compared with misinformation during other world events, Carley said, because this is a public health crisis. Denying the legitimacy of the virus or promoting false treatments can cause people physical harm, she said.
Other social media platforms, including Facebook, Reddit and YouTube, have since been added to the research. Twitter announced last week that it would label misleading or disputed tweets about the virus, though the company told CBS News that it would not take enforcement action on every tweet in question. Twitter told NPR that it has removed thousands of tweets containing misinformation. But Carley said that may not be enough.
“There’s a lot of misinformation out there, and just because companies are trying to ban it doesn’t mean it’s going to be cleaned up, because we’re seeing it resurface,” she said.
That doesn’t mean there is no solution, though. Carley said there are a lot of things that can mitigate the spread of misinformation and make people more resilient and able to “think around it.” She encouraged social media users to always think critically and verify information with credible sources.
“If it sounds too good to be true, it’s probably false,” she said. “If there’s an emergency response that sounds so draconian and harsh, it’s probably false. Go to a credible source if you see these extreme things.”