A group of researchers from the University of Sheffield has developed a new AI system capable of predicting which social media users will spread disinformation before they actually share it.
The research included an analysis of over 1 million tweets from around 6,200 Twitter users. The researchers labelled news media accounts on Twitter as either trustworthy or untrustworthy, classifying the unreliable ones into four categories: satire, propaganda, hoax, and clickbait. They then looked up the most recent tweets referencing these sources and filtered them.
The researchers then put Twitter users into two categories: those who had shared unreliable sources several times, and those who only reposted news from trustworthy sites.
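The two-group split described above can be sketched as a simple counting rule. This is a hedged illustration only: the threshold, source names, and function are assumptions for the example, not the paper's exact criterion.

```python
# Hypothetical sketch of the user-labelling step: a user is labelled
# "unreliable" if they shared unreliable sources at least `threshold` times.
# The threshold value and the example domains are invented for illustration.

def label_user(shared_sources, unreliable_sources, threshold=2):
    """Return 'unreliable' if the user shared unreliable sources
    at least `threshold` times, else 'trustworthy'."""
    hits = sum(1 for s in shared_sources if s in unreliable_sources)
    return "unreliable" if hits >= threshold else "trustworthy"

# Invented example data:
unreliable = {"hoax-news.example", "clickbait.example"}
print(label_user(["bbc.co.uk", "hoax-news.example", "clickbait.example"],
                 unreliable))                       # → unreliable
print(label_user(["bbc.co.uk", "reuters.com"], unreliable))  # → trustworthy
```

A count-based rule like this keeps the labels deterministic, so the downstream model is trained against a fixed ground truth rather than a fuzzy judgement per tweet.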
With all of this information, the scientists trained a series of AI models to predict whether a user would go on to spread disinformation. Their best-performing model, T-BERT, predicts with 79.7% accuracy whether a Twitter user will repost unreliable news.
The study thus showed that neural models can automatically discover relationships between the textual content a user generates and that user's likelihood of spreading unreliable sources in the future.
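The paper's T-BERT fine-tunes a BERT transformer, which is too heavy to reproduce here. As a minimal, hedged stand-in, the stdlib-only naive Bayes classifier below illustrates the same core idea: predicting a user's group purely from the words in their tweets. All training data, labels, and thresholds are invented for the example and are not from the study.

```python
# Illustrative word-based classifier (naive Bayes with Laplace smoothing),
# standing in for the study's transformer model. Data below is invented.
from collections import Counter
import math

def train(examples):
    """examples: list of (text, label) pairs.
    Returns per-label word counts and per-label document counts."""
    counts, totals = {}, Counter()
    for text, label in examples:
        counts.setdefault(label, Counter()).update(text.lower().split())
        totals[label] += 1
    return counts, totals

def predict(counts, totals, text):
    """Return the most probable label for `text` under naive Bayes."""
    vocab = {w for c in counts.values() for w in c}
    best, best_score = None, float("-inf")
    for label, c in counts.items():
        n = sum(c.values())
        score = math.log(totals[label] / sum(totals.values()))  # prior
        for w in text.lower().split():
            # Laplace (add-one) smoothing so unseen words don't zero the score
            score += math.log((c[w] + 1) / (n + len(vocab)))
        if score > best_score:
            best, best_score = label, score
    return best

# Invented toy tweets echoing the language difference reported in the study:
tweets = [
    ("the election is rigged wake up", "unreliable"),
    ("politics is corrupt they hide the truth", "unreliable"),
    ("had a great dinner with family tonight", "trustworthy"),
    ("enjoying a sunny walk in the park", "trustworthy"),
]
counts, totals = train(tweets)
print(predict(counts, totals, "they rigged the election"))  # → unreliable
```

The toy model picks up the same kind of signal the article describes: political vocabulary weighs toward the unreliable group, everyday personal vocabulary toward the trustworthy one.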
The AI model can also detect differences in language use between the two groups of users. Twitter users who spread content from unreliable sources tend to tweet mostly about politics or religion, while people who share trustworthy sources tweet more about their personal lives. Another finding was that users who use impolite language and spread more unreliable content display higher levels of online political hostility.
The researchers hope their findings will help curb the spread of disinformation.