Apparently, current machine learning models aren't yet up to the task of distinguishing false news reports and, like the US government, tend to believe what every Russian troll tells them.
Some experts hoped that the same machine-learning-based systems could be trained to detect fake stories. But MIT doctoral student Tal Schuster's studies show that, while machines are great at detecting machine-generated text, they can't identify whether stories are true or false.
Many automated fact-checking systems are trained using a database of true statements called Fact Extraction and Verification (FEVER).
In one study, Schuster and his team showed that machine-learning-based fact-checking systems struggled to handle negative statements ("Greg never said his car wasn't blue") even when they knew the positive statement was true ("Greg says his car is blue").
The problem, say the researchers, is that the database is filled with human bias. The people who created FEVER tended to write their false entries as negative statements and their true statements as positive statements -- so the computers learned to rate sentences with negative statements as false. That means the systems were solving a much easier problem than detecting fake news.
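To see how that bias plays out, here's a minimal sketch (not the researchers' actual system) of a claim-only classifier trained on a hypothetical toy dataset where, as in FEVER, annotators wrote false entries as negations of true ones. The model ends up keying on negation words rather than facts:

```python
from collections import Counter

# Hypothetical toy data mirroring the bias described above: annotators
# wrote REFUTED claims as negations of SUPPORTED ones.
train = [
    ("greg says his car is blue", "SUPPORTED"),
    ("the bridge opened in 1937", "SUPPORTED"),
    ("the film won three awards", "SUPPORTED"),
    ("greg never said his car was blue", "REFUTED"),
    ("the bridge did not open in 1937", "REFUTED"),
    ("the film won no awards", "REFUTED"),
]

# A claim-only "fact checker": score each word by how often it appears
# with each label -- no evidence is ever consulted.
counts = {"SUPPORTED": Counter(), "REFUTED": Counter()}
for text, label in train:
    counts[label].update(text.split())

def classify(claim):
    score = sum(counts["SUPPORTED"][w] - counts["REFUTED"][w]
                for w in claim.split())
    return "SUPPORTED" if score >= 0 else "REFUTED"

# The negation cues ("did", "not") drag a perfectly plausible true
# statement into the REFUTED bucket:
print(classify("the bridge did not collapse in 1937"))  # → REFUTED
```

The classifier never checks anything against the real world; it has simply learned the annotators' writing habit, which is exactly why the researchers argue such systems are solving an easier problem than fact-checking.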