A new machine-learning program from Los Alamos National Laboratory (LANL) accurately identifies COVID-19-related conspiracy theories on social media and models how they evolved over time, a tool that could someday help public health officials combat misinformation online, according to a study published April 14 in JMIR Public Health and Surveillance.
“A lot of machine-learning studies related to misinformation on social media focus on identifying different kinds of conspiracy theories,” said Courtney Shelley, a postdoctoral researcher in the Information Systems and Modeling Group at LANL and co-author of the study. “Instead, we wanted to create a more cohesive understanding of how misinformation changes as it spreads. Because people tend to believe the first message they encounter, public health officials could someday monitor which conspiracy theories are gaining traction on social media and craft factual public information campaigns to preempt widespread acceptance of falsehoods.”
The study, titled “Thought I’d Share First,” used publicly available, anonymized Twitter data to characterize four COVID-19 conspiracy theory themes and provide context for each through the first five months of the pandemic.
The four themes the study examined were that 5G cell towers spread the virus; that the Bill and Melinda Gates Foundation engineered the virus or otherwise has malicious intent related to COVID-19; that the virus was bioengineered or developed in a laboratory; and that the COVID-19 vaccines, all of which were then still in development, would be dangerous.
“We began with a dataset of approximately 1.8 million tweets that contained COVID-19 keywords or were from health-related Twitter accounts,” said Dax Gerts, a computer scientist also in Los Alamos’ Information Systems and Modeling Group and the study’s co-author. “From this body of data, we identified subsets that matched the four conspiracy theories using pattern filtering, and hand-labeled several hundred tweets in each conspiracy theory category to construct training sets.”
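The pattern filtering Gerts describes can be sketched with keyword regexes that route each tweet to candidate theme subsets. The patterns and tweets below are illustrative assumptions, not the study's actual filters:

```python
import re

# Illustrative keyword patterns for the four themes; the study's real
# filters are not published in this article, so these are assumptions.
THEME_PATTERNS = {
    "5g": re.compile(r"\b5g\b", re.IGNORECASE),
    "gates": re.compile(r"\bgates\b", re.IGNORECASE),
    "lab_origin": re.compile(r"\blab(oratory)?\b|bioengineer", re.IGNORECASE),
    "vaccine": re.compile(r"vaccin", re.IGNORECASE),
}

def match_themes(tweet: str) -> list[str]:
    """Return the theme names whose pattern appears in the tweet."""
    return [name for name, pattern in THEME_PATTERNS.items()
            if pattern.search(tweet)]

print(match_themes("5G towers are spreading the virus"))     # ['5g']
print(match_themes("The vaccine was made in a laboratory"))  # ['lab_origin', 'vaccine']
```

Tweets matched this way would still need hand labeling, since a keyword hit (e.g. any mention of 5G) does not by itself make a tweet misinformation.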
Using the data collected for each of the four theories, the team built random forest models, a machine-learning (AI) technique, to categorize tweets as COVID-19 misinformation or not.
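A minimal sketch of such a classifier, assuming a scikit-learn stack and a tiny invented training set (the study's real training sets held several hundred hand-labeled tweets per theme):

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy "hand-labeled" training set: 1 = misinformation, 0 = not.
# These tweets are invented for illustration.
tweets = [
    "5g towers are spreading the virus wake up",
    "the vaccine will secretly microchip everyone",
    "health officials release updated mask guidance",
    "local hospital reports fewer icu admissions this week",
]
labels = [1, 1, 0, 0]

# Turn raw text into numeric features the forest can split on.
vectorizer = TfidfVectorizer(ngram_range=(1, 2))
X = vectorizer.fit_transform(tweets)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, labels)

# Score an unseen tweet; given the word overlap ("microchip", "vaccine"),
# the model should lean toward the misinformation class.
new = vectorizer.transform(["they want to microchip people through the vaccine"])
print(model.predict(new))
```

Random forests suit this task because they handle high-dimensional sparse text features without much tuning, though any standard text classifier could stand in here.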
“This allowed us to observe the way individuals talk about these conspiracy theories on social media, and observe changes over time,” Gerts said.
The study showed that misinformation tweets carry more negative sentiment than factual tweets, and that conspiracy theories evolve over time, incorporating details from unrelated conspiracy theories as well as real-world events.
For example, Bill Gates participated in a Reddit “Ask Me Anything” in March 2020, which highlighted Gates-funded research to develop injectable invisible ink that could be used to record vaccinations. Immediately after, there was an increase in the prominence of words associated with vaccine-averse conspiracy theories suggesting the COVID-19 vaccine would secretly microchip individuals for population control.
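The sentiment comparison can be illustrated with a toy lexicon-based scorer. The word lists and tweets below are invented, and the study's actual sentiment method is not described in this article:

```python
# Toy lexicon-based sentiment: count positive words minus negative words.
# Real analyses use trained sentiment models, not hand-picked word lists.
NEGATIVE = {"dangerous", "hoax", "secret", "control", "lie"}
POSITIVE = {"safe", "effective", "helpful", "progress", "recovery"}

def sentiment(tweet: str) -> int:
    words = tweet.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

misinfo = ["the vaccine is a dangerous hoax designed for control"]
factual = ["trial data show the vaccine is safe and effective"]

mean = lambda scores: sum(scores) / len(scores)
misinfo_mean = mean([sentiment(t) for t in misinfo])
factual_mean = mean([sentiment(t) for t in factual])
print(misinfo_mean, factual_mean)  # -3.0 2.0
```

Comparing the two averages across large tweet sets is the basic shape of the study's sentiment finding: the misinformation group scores more negative.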
Furthermore, the study found that a supervised learning technique could automatically identify conspiracy theories, and that an unsupervised learning approach (dynamic topic modeling) could track how the importance of words shifts among topics within each theory.
“It’s important for public health officials to know how conspiracy theories are evolving and gaining traction over time,” Shelley said. “If not, they run the risk of inadvertently publicizing conspiracy theories that might otherwise ‘die on the vine.’ So, knowing how conspiracy theories are changing and perhaps incorporating other theories or real-world events is important when strategizing how to counter them with factual public information campaigns.”
Read the study: https://publichealth.jmir.org/2021/4/e26527