
Researchers at Indiana University’s Bloomington School of Informatics and Computing have released a tool designed to detect whether a Twitter account is operated by an automated “bot” system or by a real person, part of their continued effort to raise awareness of how such accounts can be abused in misinformation campaigns.

The BotOrNot tool’s development was funded by the U.S. Department of Defense and the National Science Foundation. The tool analyzes thousands of variables related to a Twitter account, including its network structure, content, and temporal patterns, in real time, and then calculates the probability that the account is controlled by automated software.

“We have applied a statistical learning framework to analyze Twitter data, but the ‘secret sauce’ is in the set of more than one thousand predictive features able to discriminate between human users and social bots, based on content and timing of their tweets, and the structure of their networks,” said Alessandro Flammini, principal investigator on the project. “The demo that we’ve made available illustrates some of these features and how they contribute to the overall ‘bot or not’ score of a Twitter account.”
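
To make the approach concrete, here is a minimal sketch in Python (using scikit-learn) of how a feature-based classifier of this kind can turn an account’s network, content, and temporal signals into a bot probability. The three features, the synthetic training data, and the random-forest model below are illustrative assumptions only; they are not BotOrNot’s actual feature set, which comprises more than one thousand features, or its trained models.

```python
# Illustrative sketch of a feature-based "bot or not" classifier.
# The features, account fields, and model here are assumptions for
# demonstration; the real BotOrNot system uses over one thousand
# predictive features and its own trained models.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def extract_features(account):
    """Map a (hypothetical) account record to a small feature vector
    covering network, content, and temporal signals."""
    gaps = np.diff(sorted(account["tweet_timestamps"]))
    return np.array([
        account["followers"] / max(account["friends"], 1),            # network
        account["retweet_count"] / max(account["tweet_count"], 1),    # content
        gaps.std() / max(gaps.mean(), 1e-9),                          # temporal regularity
    ])

# Tiny synthetic training set standing in for labeled human/bot accounts.
rng = np.random.default_rng(0)
X_train = rng.random((200, 3))
y_train = (X_train[:, 2] < 0.3).astype(int)  # "bots" tweet on a regular clock

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Score one hypothetical account: few followers, mostly retweets,
# and tweets posted at perfectly regular intervals.
account = {"followers": 40, "friends": 2000, "retweet_count": 950,
           "tweet_count": 1000, "tweet_timestamps": list(range(0, 6000, 60))}
bot_probability = clf.predict_proba([extract_features(account)])[0, 1]
print(f"bot score: {bot_probability:.2f}")
```

In the real system, the contribution of individual features to the final score is what the demo visualizes for a given account.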

The military’s support for the project centers on concerns that modern social media platforms, combined with the proliferation of mobile information technology, could negatively impact national security if leveraged to conduct large-scale misinformation campaigns.

BotOrNot classifies accounts with a statistical accuracy of about 95%, and the researchers hope the tool will be useful in surveying the Twittersphere to determine how many accounts are actually controlled by bots, and which of those may be malicious.
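
For context on what a figure like 95% means in practice, the sketch below shows how classification accuracy is commonly estimated with k-fold cross-validation on labeled accounts. The data here is synthetic, and the article does not describe the researchers’ actual evaluation corpus or protocol.

```python
# Hedged sketch: estimating classifier accuracy via 10-fold
# cross-validation. Synthetic stand-in data; not the researchers'
# actual evaluation setup.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.random((1000, 3))
y = (X[:, 2] < 0.3).astype(int)  # synthetic human/bot labels

scores = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=10)
print(f"mean accuracy: {scores.mean():.2%}")
```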

“Part of the motivation of our research is that we don’t really know how bad the problem is in quantitative terms,” said Fil Menczer, director of IU’s Center for Complex Networks and Systems Research. “Are there thousands of social bots? Millions? We know there are lots of bots out there, and many are totally benign. But we also found examples of nasty bots used to mislead, exploit and manipulate discourse with rumors, spam, malware, misinformation, political astroturf and slander.”

The researchers said that the proliferation of social bots could threaten the democratic process, incite panic during an emergency, influence the stock market, and hinder effective public policy. The results of the two-year, $2 million project were presented to the DoD last month.
