How to fight misinformation with the support of machine learning tools?

Note: This was originally written by me & submitted to the Technical University of Munich as an entrance essay.

Motivation

The election of American president Donald J. Trump in 2016 propelled misinformation, often called “fake news”, into the mainstream discourse. The revelation that Russian troll farms and media outlets produced misinformation that supported Trump and undermined his opponents shocked American voters [1], [2]. This episode demonstrates the need for effective, preventive means of countering misinformation.

This text considers misinformation, referred to as “fake news”, to be false or inaccurate information that is intentionally deceptive, spread for various purposes, and often presented in the form of propaganda. Fake news can be used as a rhetorical trick to achieve political goals, such as undermining foreign states or winning an election. For example, Russian intelligence used Twitter, Facebook [3], and other forms of media to influence the US election [4]. The “Pizzagate shooter” exemplifies the extreme consequences of misinformation: a man named Edgar M. Welch stormed a Washington pizzeria in search of a child-abuse dungeon, supposedly hidden in a basement that never existed [5]. However, detecting misinformation is a non-trivial task both manually and algorithmically [6]. Even highly qualified machine learning engineers at Facebook struggle with their platform’s misinformation problem [7]. Fortunately, misinformation is spread publicly, so data to mine is readily available through public sources like Twitter. With this data, machine learning (ML) methods can be developed, applied, and tested to identify and thwart misinformation.

Problem Statement

The goal of this essay is to highlight existing machine learning approaches that tackle the problem of misinformation. Developing such tools is an important responsibility given the far-reaching consequences of misinformation. Because it is difficult for people to tell the difference between true and false information [8], we should apply machine learning tools to help. Machines process data at a far faster rate than humans, so machine learning models can sift through large volumes of content and detect the nuanced patterns that expose deceptive media.

Approach

The approach of this essay is to survey academic papers and websites for promising machine learning approaches that contribute to solving the problem described above. The results reported in these papers must be compared with care, taking into account differences in their data sets and evaluation metrics. Many studies create their own data set, which makes studies difficult to compare, and even studies that share a common data set may report only certain metrics, such as precision, while omitting other useful ones, such as the F1 score; the sketch below shows why that matters.
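The following minimal example (with invented toy predictions, not data from any cited study) shows how a model can look strong on precision alone while recall and the F1 score reveal that half of the deceptive items were missed:

```python
# Toy example: an imbalanced test set with 2 fake items among 10.
# The model flags only one of them -- precision looks perfect,
# but recall and F1 reveal that half of the fakes slip through.
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

y_true = [1, 1, 0, 0, 0, 0, 0, 0, 0, 0]  # 1 = fake, 0 = real
y_pred = [1, 0, 0, 0, 0, 0, 0, 0, 0, 0]  # only one fake item detected

print("accuracy :", accuracy_score(y_true, y_pred))   # 0.90
print("precision:", precision_score(y_true, y_pred))  # 1.00
print("recall   :", recall_score(y_true, y_pred))     # 0.50
print("f1       :", f1_score(y_true, y_pred))         # ~0.67
```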

Moreover, I also consider the “Fake News Challenge” (FNC-1) as a source; it was created by academics and practitioners in response to the difficulties of misinformation detection [9].

Results & Discussion

Artificial neural networks (ANNs) have several advantages and disadvantages. The main advantages are as follows. 1) Generally speaking, ANNs improve with more data. 2) They are flexible: they can be used for classification and regression problems as well as unsupervised, semi-supervised, and supervised learning. 3) ANNs do not require as much feature engineering as more traditional ML approaches. 4) Lastly, inference is fast with a trained model. The disadvantages are as follows. 1) ANNs are “black box” models, meaning that it is difficult to understand how a model arrives at a particular output [10]. However, work has been done on this problem in the context of misinformation, such as that of O’Brien et al. [11], which revealed words more commonly associated with fake news than with real news. Additionally, many ANN configurations require 2) large amounts of computational power to train and 3) large amounts of data to minimise the error.
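To illustrate how little feature engineering such a model needs, here is a toy sketch of a linguistic ANN classifier. It is my own illustration, not the architecture of any cited work, and the headlines and labels are invented:

```python
# A toy sketch: TF-IDF features feed a small fully connected network.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

headlines = [
    "Scientists confirm vaccine safe in large clinical trial",
    "Government report shows unemployment fell last quarter",
    "Celebrity secretly replaced by clone, insiders claim",
    "Miracle cure doctors don't want you to know about",
]
labels = ["real", "real", "fake", "fake"]

# TF-IDF turns each headline into a sparse word-weight vector; no
# hand-crafted deception features are required.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0),
)
model.fit(headlines, labels)

print(model.predict(["Insiders claim miracle cure was secretly confirmed"]))
```

A real system would of course train on thousands of labelled examples rather than four, but the pipeline shape stays the same.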

A 2015 survey by Conroy et al. [12] found two broad categories of methods: linguistic approaches, in which language patterns are analysed for deception, and network approaches, in which network information such as message metadata provides aggregate deception measures. There are a couple of common data sets for linguistic research, including LIAR by Wang [13], based on data scraped from the political fact-checking site PolitiFact, and the FNC-1 data set. Common data sets make it easier to compare the performance of different approaches. For this reason, I have excluded allegedly better-performing works that use other data sets.
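As a concrete example of working with such a benchmark, the sketch below loads LIAR’s training split with pandas. The column names follow my reading of the data set’s README and are an assumption of this sketch, since the released TSV files ship without a header row:

```python
# Hedged sketch: loading the LIAR training split [13]. Assumes train.tsv
# from the released archive sits in the working directory.
import pandas as pd

# Assumed column layout (from the data set's README, not verified here).
columns = [
    "id", "label", "statement", "subject", "speaker", "job_title", "state",
    "party", "barely_true_count", "false_count", "half_true_count",
    "mostly_true_count", "pants_on_fire_count", "context",
]
train = pd.read_csv("train.tsv", sep="\t", names=columns)

# LIAR uses six truthfulness labels, from "pants-fire" up to "true".
print(train["label"].value_counts())
```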

The only metric shared by all works is accuracy: the ratio of correct predictions to the total number of predictions. On the Wang data set, the performance of models, measured in terms of accuracy, has risen from 44.87% [14] to 48.5% [15], and up to 73.8% in 2018 [16]. On FNC-1, the state of the art advanced rapidly, from the competition organisers’ 79.53% accuracy baseline to 85.2% [17], then 89% [18], and up to 94.31% [19]. Ultimately, the FNC-1 was won by Cisco Systems (Talos Group) [20] with an approach that averaged the outputs of decision trees and a deep convolutional neural network.
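One caveat when reading these numbers across data sets: accuracy depends heavily on class balance, so the figures are not directly comparable. The counts in the illustration below are invented; the roughly 73% majority share approximates the published proportion of “unrelated” pairs in FNC-1:

```python
# Accuracy is correct predictions divided by total predictions, so a model
# that always predicts the majority class scores that class's share of the data.
def accuracy(correct: int, total: int) -> float:
    return correct / total

# On a balanced 50/50 data set, always guessing one class yields 50%...
print(accuracy(500, 1000))  # 0.5
# ...but where roughly 73% of examples fall into one class (approximately
# FNC-1's "unrelated" share), the same trivial strategy already scores 73%.
print(accuracy(730, 1000))  # 0.73
```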

The common thread in these well-performing models is the use of neural networks. According to Zeng et al. [17], all of the neural network models they surveyed outperform systems built on hand-crafted features, such as Support Vector Machines. In 2019, Cardoso Durier da Silva et al. [21] surveyed the state of the art and concluded that network analysis approaches are more successful.

Misinformation can be fought by deploying machine learning models strategically in various tools, for example to inform readers about the credibility of information sources. Knowing whether a given piece of media is truthful can change how a reader interprets it. For example, a browser extension could inform a user whether an article or piece of media is truthful or deceptive by embedding the inferences of a neural network into every page the user visits. Social media platforms such as Facebook could implement another application: filtering out ads considered deceptive and augmenting media posted by users with a truthfulness indicator. There are certainly many more ways machine learning tools can be used to fight misinformation; a sketch of one possible serving component follows.
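As a hypothetical illustration, a minimal HTTP service could expose a classifier’s inferences to such a browser extension. The /score endpoint and the two-example stand-in model below are my invention, not a system from the cited literature; a real deployment would load a properly trained and validated model:

```python
# Hypothetical sketch: a small HTTP endpoint a browser extension could call
# with article text to receive a truthfulness score.
from flask import Flask, request, jsonify
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# Stand-in model so the sketch is self-contained (0 = credible, 1 = deceptive).
model = make_pipeline(TfidfVectorizer(), MLPClassifier(max_iter=500, random_state=0))
model.fit(
    ["vaccine found safe in large clinical trial",
     "miracle cure doctors hide from you"],
    [0, 1],
)

app = Flask(__name__)

@app.route("/score", methods=["POST"])
def score():
    text = request.get_json()["text"]
    # Estimated probability that the submitted text is deceptive.
    fake_prob = float(model.predict_proba([text])[0, 1])
    return jsonify({"fake_probability": fake_prob})

if __name__ == "__main__":
    app.run(port=8080)
```

The extension would then POST the page’s text to /score and render the returned probability as an on-page indicator.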

Conclusions

Based on the surveyed literature, I conclude that misinformation can be detected on a “good enough” basis, and I argue that linguistic approaches using neural networks are the most effective tools currently available. These algorithms are not perfect; they suffer from problems such as the black box problem. The other approaches I evaluated have more significant drawbacks: network-based approaches, for example, can only be applied reactively, after a piece of misinformation has spread and the data has been collected. I therefore propose to investigate linguistic approaches further as a proactive tool. Misinformation can successfully be fought with the support of machine learning tools by deploying neural networks trained to detect it. The state of the art in detecting misinformation (on a particular data set) stands at an accuracy of 94.31% [19], which makes it more than “good enough” for practical applications. Considering the advantages and disadvantages, I think that neural networks represent a promising tool to fight misinformation in our time.

References

[1] J. Downie, “What really disturbs voters about Russia’s election interference,” The Washington Post. WP Company, Jul-2018 [Online]. Available: https://www.washingtonpost.com/blogs/post-partisan/wp/2018/07/22/what-really-disturbs-voters-about-russias-election-interference/. [Accessed: 23-Apr-2019]

[2] Greene, “Senate report: Moscow directed ’troll farm’ in efforts to elect Trump,” The Next Web. Dec-2018 [Online]. Available: https://thenextweb.com/politics/2018/12/17/senate-report-moscow-directed-troll-farm-in-efforts-to-elect-trump/. [Accessed: 23-Apr-2019]

[3] A. Weisburd and C. Watts, “How Russia dominates your Twitter feed to promote lies (and, Trump, too),” The Daily Beast. The Daily Beast Company, Aug-2016 [Online]. Available: https://www.thedailybeast.com/how-russia-dominates-your-twitter-feed-to-promote-lies-and-trump-too. [Accessed: 23-Apr-2019]

[4] A. Watkins, “Intel officials believe Russia spreads fake news,” BuzzFeed News. BuzzFeed News, Nov-2016 [Online]. Available: https://www.buzzfeednews.com/article/alimwatkins/intel-officials-believe-russia-spreads-fake-news. [Accessed: 23-Apr-2019]

[5] C. Kang and A. Goldman, “In Washington pizzeria attack, fake news brought real guns,” The New York Times. The New York Times, Dec-2016 [Online]. Available: https://www.nytimes.com/2016/12/05/business/media/comet-ping-pong-pizza-shooting-fake-news-consequences.html. [Accessed: 23-Apr-2019]

[6] D. Oberhaus, “Teaching machines to detect fake news is really hard,” Motherboard. VICE, May-2017 [Online]. Available: https://motherboard.vice.com/en_us/article/9aebw7/teaching-machines-to-detect-fake-news-is-really-hard. [Accessed: 23-Apr-2019]

[7] C. Silverman, “In spite of its efforts, facebook is still the home of hugely viral fake news,” BuzzFeed News. BuzzFeed News, Dec-2018 [Online]. Available: https://www.buzzfeednews.com/article/craigsilverman/facebook-fake-news-hits-2018. [Accessed: 23-Apr-2019]

[8] C. Domonoske, “Students have ’dismaying’ inability to tell fake news from real, study finds,” NPR. NPR, Nov-2016 [Online]. Available: https://www.npr.org/sections/thetwo-way/2016/11/23/503129818/study-finds-students-have-dismaying-inability-to-tell-fake-news-from-real

[9] “Fake news challenge stage 1 (FNC-1): Stance detection,” Fake News Challenge. [Online]. Available: http://www.fakenewschallenge.org/. [Accessed: 23-Apr-2019]

[10] R. Matheson, “Peering under the hood of fake-news detectors,” MIT News Office. Feb-2019 [Online]. Available: http://news.mit.edu/2019/opening-machine-learning-black-box-fake-news-0206. [Accessed: 23-Apr-2019]

[11] N. O’Brien, S. Latessa, G. Evangelopoulos, and X. Boix, “The language of fake news: Opening the black-box of deep learning based detectors.” Center for Brains, Minds and Machines (CBMM), Montreal, Canada, Nov-2018 [Online]. Available: http://hdl.handle.net/1721.1/120056. [Accessed: 23-Apr-2019]

[12] N. J. Conroy, V. L. Rubin, and Y. Chen, “Automatic deception detection: Methods for finding fake news,” Proceedings of the Association for Information Science and Technology, vol. 52, no. 1, pp. 1–4, 2015.

[13] W. Y. Wang, “‘Liar, liar pants on fire’: A new benchmark dataset for fake news detection,” arXiv preprint arXiv:1705.00648, 2017.

[14] A. Roy, K. Basak, A. Ekbal, and P. Bhattacharyya, “A deep ensemble framework for fake news detection and classification,” arXiv preprint arXiv:1811.04670, 2018.

[15] F. C. Fernández-Reyes and S. Shinde, “Evaluating deep neural networks for automatic fake news detection in political domain,” in Proceedings of the Ibero-American Conference on Artificial Intelligence, 2018, pp. 206–216.

[16] P. T. Tin, “A study on deep learning for fake news detection,” Master’s thesis, Japan Advanced Institute of Science and Technology, Information Science, Nomi, Ishikawa, Japan, 2018.

[17] Q. Zeng, Q. Zhou, and S. Xu, “Neural stance detectors for fake news challenge.”

[18] R. Davis and C. Proctor, “Fake news, real consequences: Recruiting neural networks for the fight against fake news.”

[19] A. Thota, P. Tilak, S. Ahluwalia, and N. Lohia, “Fake news detection: A deep learning approach,” SMU Data Science Review, vol. 1, no. 3, p. 10, 2018.

[20] S. Baird, “Talos targets disinformation with fake news challenge victory,” Talos Blog, Cisco Talos Intelligence Group. Jun-2017 [Online]. Available: https://blog.talosintelligence.com/2017/06/talos-fake-news-challenge.html. [Accessed: 23-Apr-2019]

[21] F. Cardoso Durier da Silva, R. Vieira, and A. C. Garcia, “Can machines learn to detect fake news? A survey focused on social media,” in Proceedings of the 52nd Hawaii International Conference on System Sciences, 2019.

Last modified 2019.12.06