Project description: Basol et al. (2020) tested the "Bad News Game" (BNG), an app designed to improve people's ability to spot false claims on social media. Participants rated simulated Tweets, then played either the BNG or an unrelated game, then re-rated the Tweets. Playing the BNG lowered rated belief in false Tweets. Here, four teams of undergraduate psychology students each attempted an extended replication of Basol et al., using updated versions of the original Bad News game. The most important extension was that the replications included a larger number of true Tweets than the original study, together with planned analyses of responses to true Tweets. The four replications were loosely coordinated, with each team independently working out how to implement the agreed plan. Despite many departures from the Basol et al. method, all four teams replicated the key finding: playing the BNG reduced belief in false Tweets. However, playing the BNG also reduced belief in true Tweets to the same or almost the same extent. Exploratory signal detection theory analyses indicated that the BNG increased response bias but did not improve discrimination. This converges with the findings reported by Modirrousta-Galian and Higham (2023).
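The discrimination and response-bias measures mentioned above are standard signal detection theory quantities. As a minimal sketch (not the teams' actual analysis code; the counts and the believable/not-believable mapping are illustrative), d′ and the criterion c are typically computed from hit and false-alarm rates like this:

```python
from scipy.stats import norm

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Compute discrimination (d') and response bias (c) from response counts.

    Here a "hit" is rating a true Tweet as believable and a "false alarm"
    is rating a false Tweet as believable (this mapping is illustrative).
    """
    # Log-linear correction avoids infinite z-scores at rates of 0 or 1.
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)             # discrimination
    criterion = -0.5 * (norm.ppf(hit_rate) + norm.ppf(fa_rate))  # response bias
    return d_prime, criterion

# Example: dichotomized belief ratings after playing the BNG
print(sdt_measures(hits=40, misses=10, false_alarms=20, correct_rejections=30))
```

Under this mapping, the reported pattern, reduced belief in both true and false Tweets, appears as a more conservative criterion (higher c) with little or no change in d′.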
Project description: Recent research has explored the possibility of building attitudinal resistance against online misinformation through psychological inoculation. The inoculation metaphor relies on a medical analogy: by pre-emptively exposing people to weakened doses of misinformation, cognitive immunity can be conferred. A recent example is the Bad News game, an online fake news game in which players learn about six common misinformation techniques. We present a replication and extension examining the effectiveness of Bad News as an anti-misinformation intervention. We address three shortcomings identified in the original study: the lack of a control group, the relatively low number of test items, and the absence of attitudinal certainty measurements. Using a 2 (treatment vs. control) × 2 (pre vs. post) mixed design (N = 196), we measure participants' ability to spot misinformation techniques in 18 fake headlines before and after playing Bad News. We find that playing Bad News significantly improves people's ability to spot misinformation techniques compared to a gamified control group and, crucially, also increases people's level of confidence in their own judgments. Importantly, this confidence boost only occurred for those who updated their reliability assessments in the correct direction. This study offers further evidence for the effectiveness of psychological inoculation against not only specific instances of fake news, but the very strategies used in its production. Implications are discussed for inoculation theory and cognitive science research on fake news.
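In a 2 (group) × 2 (pre vs. post) mixed design like this one, the key group × time interaction can be tested equivalently as an independent-samples comparison of pre-to-post change scores. A minimal sketch with simulated data (the numbers are illustrative assumptions, not the study's analysis code):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Illustrative pre/post scores for treatment (Bad News) and control players,
# 98 participants per group (N = 196 total, matching the design).
pre_t, post_t = rng.normal(4.0, 1.0, 98), rng.normal(4.8, 1.0, 98)
pre_c, post_c = rng.normal(4.0, 1.0, 98), rng.normal(4.1, 1.0, 98)

# In a 2x2 mixed design, the group x time interaction is equivalent to an
# independent-samples t-test on the pre-to-post change scores.
change_t = post_t - pre_t
change_c = post_c - pre_c
t, p = stats.ttest_ind(change_t, change_c)
print(f"interaction: t = {t:.2f}, p = {p:.4f}")
```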
Project description: We examine the role of overconfidence in news judgment using two large nationally representative survey samples. First, we show that three in four Americans overestimate their relative ability to distinguish between legitimate and false news headlines; respondents place themselves 22 percentiles higher than warranted on average. This overconfidence is, in turn, correlated with consequential differences in real-world beliefs and behavior. We show that overconfident individuals are more likely to visit untrustworthy websites in behavioral data; to fail to successfully distinguish between true and false claims about current events in survey questions; and to report greater willingness to like or share false content on social media, especially when it is politically congenial. In all, these results paint a worrying picture: The individuals who are least equipped to identify false news content are also the least aware of their own limitations and, therefore, more susceptible to believing it and spreading it further.
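The overplacement measure described here, self-placed percentile minus actual percentile, can be illustrated with a short sketch (the data and scoring details below are assumptions for illustration, not the authors' materials):

```python
import numpy as np
from scipy.stats import percentileofscore

# Illustrative data: each respondent's headline-accuracy score and the
# percentile they claimed for themselves ("better than X% of Americans").
accuracy = np.array([12, 9, 15, 7, 11, 14, 8, 10])  # correct out of 20
claimed_percentile = np.array([80, 75, 90, 70, 85, 60, 88, 72])

# Actual percentile rank of each respondent within the sample.
actual_percentile = np.array(
    [percentileofscore(accuracy, a, kind="mean") for a in accuracy]
)

# Positive values indicate overplacement (claiming a higher rank than earned).
overplacement = claimed_percentile - actual_percentile
print(f"mean overplacement: {overplacement.mean():.1f} percentiles")
```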
Project description: To extract finer-grained segment features from news and to represent users accurately and exhaustively, this article develops a news recommendation (NR) model based on a sub-attention news encoder. First, using a convolutional neural network (CNN) and a sub-attention mechanism, the model extracts a rich feature matrix from the news text. Then, fine-grained image features are extracted from the perspectives of position and channel. Next, a multi-head self-attention mechanism is applied to the user's news browsing history, and time-series prediction is used to model the user's interests. Finally, the experimental results show that the proposed model performs well on the indicators mean reciprocal rank (MRR), normalized discounted cumulative gain (NDCG), and area under the curve (AUC), with average increases of 4.18%, 5.63%, and 6.55%, respectively. The comparative results demonstrate that the model performs best on a variety of datasets and has the fastest convergence speed in all cases. The proposed model may provide guidance for the design of future news recommendation systems.
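As a rough illustration of the user-modeling step, a multi-head self-attention layer can pool the embeddings of a user's browsed news items into a single user vector. The sketch below shows the general technique in PyTorch; the dimensions, pooling choice, and class name are assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn

class UserEncoder(nn.Module):
    """Toy user encoder: multi-head self-attention over the embeddings of
    previously browsed news items, mean-pooled into one user vector.
    (A sketch of the general technique, not the paper's exact model.)
    """
    def __init__(self, dim: int = 64, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, history: torch.Tensor) -> torch.Tensor:
        # history: (batch, num_browsed_items, dim) news embeddings
        out, _ = self.attn(history, history, history)  # self-attention
        return out.mean(dim=1)                         # pool -> user vector

user_vec = UserEncoder()(torch.randn(2, 10, 64))
print(user_vec.shape)  # torch.Size([2, 64])
```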
Project description: Why do we share fake news? Despite a growing body of freely available knowledge and information, fake news has managed to spread more widely and deeply than before. This paper seeks to understand why this is the case. More specifically, using an experimental setting, we aim to quantify the effect of veracity and perception on reaction likelihood. To examine the nature of this relationship, we set up an experiment that mimics the mechanics of Twitter, allowing us to observe users' perceptions, their reactions to the claims shown, and the factual veracity of those claims. We find that perceived veracity significantly predicts how likely a user is to react, with higher perceived veracity leading to higher reaction rates. Additionally, we confirm that fake news is inherently more likely to be shared than other types of news. Lastly, we identify an activist-type behavior, meaning that belief in fake news is associated with significantly disproportionate spreading (compared to belief in true news).
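The relationship reported here, reaction likelihood predicted by perceived and actual veracity, is the kind of effect typically estimated with a logistic regression on binary reaction data. A minimal sketch with simulated data (the predictors, coding, and effect sizes are illustrative assumptions, not the paper's model):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500

# Illustrative data: perceived veracity (1-7 rating), whether the claim is
# fake (0/1), and whether the participant reacted to (e.g., shared) it.
perceived = rng.integers(1, 8, n)
is_fake = rng.integers(0, 2, n)
true_logit = -3 + 0.5 * perceived + 0.4 * is_fake
reacted = (rng.random(n) < 1 / (1 + np.exp(-true_logit))).astype(int)

# Logistic regression: reaction ~ perceived veracity + fake-news indicator
X = sm.add_constant(np.column_stack([perceived, is_fake]))
model = sm.Logit(reacted, X).fit(disp=0)
print(model.params)  # intercept, perceived-veracity, and fake-news effects
```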
Project description: Accuracy prompts, nudges that make accuracy salient, typically decrease the sharing of fake news while having little effect on real news. Here, we introduce a new accuracy prompt that is more effective than previous prompts because it not only reduces fake news sharing but also increases real news sharing. We report four preregistered studies showing that an "endorsing accuracy" prompt ("I think this news is accurate"), placed into the sharing button, decreases fake news sharing, increases real news sharing, and keeps overall engagement constant. We also explore the mechanism through which the intervention works. The key results are specific to endorsing accuracy rather than accuracy salience, and endorsing accuracy does not simply make participants apply a "source heuristic." Finally, we use Pennycook et al.'s limited-attention model to argue that endorsing accuracy may work by making people consider their sharing decisions more carefully.