Bots don't spread fake news on Twitter, people do, say MIT researchers


A new study by three MIT scholars has found that false news spreads more rapidly on Twitter than real news does, and by a substantial margin. False news was also more likely to provoke fear, disgust and surprise in reader reactions, whereas true stories triggered reactions more closely aligned with joy and sadness. The researchers were led by Sinan Aral of the Massachusetts Institute of Technology (MIT) in Cambridge, Mass. Aral's ongoing, not-yet-published research has found that about 80 percent of false stories come from just one-tenth of 1 percent of users.

Not all false news is created equal.

In that vein, Aral says, "science needs to have more support, both from industry and government, in order to do more studies".

The stories were classified as true or false based on evaluations by six independent fact-checking organizations. False stories were shared more widely and swiftly than the truth, with false information 70 percent more likely to be retweeted. Twitter, the researchers found, remains a breeding ground for false information.

Twitter users seemingly can't help sharing half-truths and unchecked rumors over the platform, say the trio of MIT researchers who conducted the study. "It's very challenging to study how this actually affects people", Menczer said.

At the end of February, Twitter issued new rules aimed at limiting the influence of bots on the social network.

Overall, false news stories prompted cascades that traveled farther, faster, deeper, and more broadly across Twitter than those rooted in truth. "Bots can't explain this difference between the spread of false news and the spread of true news, this massive difference in how far and fast and broadly it spreads".

True news, by contrast, rarely reached more than 1,000 people, and it took true stories about six times as long to reach 1,500 people as it did false ones. Humans, not bots, seemed to be the driving force behind false stories' popularity; the study authors hypothesized that falsehoods contain more novelty than the truth. Connie Guglielmo, editor-in-chief of CNET, joined ABC7 News Thursday to discuss the 10-year study, which tells us a lot about human nature and the spread of fake news online.

So far, Twitter has mostly resisted being "the arbiters of truth", said Nick Pickles, Twitter's head of public policy for the United Kingdom.

If a fishy-sounding tweet happens to align with what you already believe, you will be less likely to question it, Menczer noted.

Society often reinforces an uncritical view of information, says David Lazer, a professor of political science and computer and information science at Northeastern University, who was not involved in the new research but co-wrote a commentary that ran alongside it.

"Let's not take it as our destiny", said Deb Roy, another of the researchers, "that we have entered into the post-truth world from which we will not emerge".

Twitter provided the data for the study and funded the work.

"The authors are very honest with the interpretation of their results: they cannot claim any causality between novelty and endorsement, but they provide convincing evidence that novelty plays an important role in spreading fake information", said Manlio De Domenico, a scientist at the Bruno Kessler Foundation's Center for Information Technology in Italy who tracked how Higgs boson rumors spread on Twitter.

Do you think you can spot fake news? Twitter, for example, announced that it blocked some accounts linked to Russian misinformation, and alerted users exposed to the accounts that they might have been "duped". "This is similar: making information available to people so they can decide what to share and what not to share".