The significance of COVID-19-associated myocardial injury: how overinterpretation of scientific findings can fuel media sensationalism and spread misinformation.
Project description: Contemporary commentators describe the current period as "an era of fake news," in which misinformation, generated intentionally or unintentionally, spreads rapidly. Although it affects all areas of life, misinformation poses particular problems in the health arena, where it can delay or prevent effective care and, in some cases, threaten lives. Examples of the rapid spread of misinformation date back to the earliest days of scientific medicine, but the internet, by allowing instantaneous communication and powerful amplification, has brought about a step change. In democracies where ideas compete in the marketplace for attention, accurate scientific information, which may be difficult to comprehend and even dull, is easily crowded out by sensationalized news. To uncover the current evidence and better understand the mechanisms by which misinformation spreads, we report a systematic review of the nature and potential drivers of health-related misinformation. We searched the PubMed, Cochrane, Web of Science, Scopus, and Google databases to identify relevant methodological and empirical articles published between 2012 and 2018. A total of 57 articles were included for full-text analysis. Overall, we observe an increasing trend in published articles on health-related misinformation and the role of social media in its propagation. The most extensively studied topics involving misinformation relate to vaccination, Ebola, and Zika virus, although others, such as nutrition, cancer, fluoridation of water, and smoking, also featured. Studies adopted theoretical frameworks from psychology and network science, while co-citation analysis revealed potential for greater collaboration across fields. Most studies employed content analysis, social network analysis, or experiments, drawing on disparate disciplinary paradigms. Future research should examine the susceptibility of different sociodemographic groups to misinformation and clarify the role of belief systems in the intention to spread it. Further interdisciplinary research is also warranted to identify effective, tailored interventions to counter the spread of health-related misinformation online.
Project description: Background: Social media has been extensively used for the communication of health-related information and, consequently, for the potential spread of medical misinformation. Conventional systematic reviews have been published on this topic to identify original articles and to summarize their methodological approaches and themes. A bibliometric study could complement their findings, for instance, by evaluating the geographical distribution of the publications and determining whether they were well cited and disseminated in high-impact journals. Objective: The aim of this study was to perform a bibliometric analysis of the current literature to discover the prevalent trends and topics related to medical misinformation on social media. Methods: The Web of Science Core Collection electronic database was accessed to identify relevant papers with the following search string: ALL=(misinformati* OR "wrong informati*" OR disinformati* OR "misleading informati*" OR "fake news*") AND ALL=(medic* OR illness* OR disease* OR health* OR pharma* OR drug* OR therap*) AND ALL=("social media*" OR Facebook* OR Twitter* OR Instagram* OR YouTube* OR Weibo* OR Whatsapp* OR Reddit* OR TikTok* OR WeChat*). Full records were exported to the bibliometric software VOSviewer to link bibliographic information with citation data. Term and keyword maps were created to illustrate recurring terms and keywords. Results: Based on an analysis of 529 papers on medical and health-related misinformation on social media, we found that the most frequently investigated social media platforms were Twitter (n=90), YouTube (n=67), and Facebook (n=57). Articles targeting these 3 platforms had higher citations per paper (>13.7) than articles covering other social media platforms (Instagram, Weibo, WhatsApp, Reddit, and WeChat; citations per paper <8.7). Moreover, platform-specific papers accounted for 44.1% (233/529) of all identified publications. Investigations on these platforms had different foci: Twitter-based research explored cyberchondria and hypochondriasis, YouTube-based research explored tobacco smoking, and Facebook-based research studied vaccine hesitancy related to autism. COVID-19 was a common topic investigated across all platforms. Overall, the United States contributed half of all identified papers, and 80% of the top 10 most productive institutions were based in this country. The identified papers were mostly published in journals in the categories of public, environmental, and occupational health; communication; health care sciences and services; medical informatics; and general internal medicine, with the top journal being the Journal of Medical Internet Research. Conclusions: There is a significant platform-specific topic preference in social media investigations of medical misinformation. Given the large population of internet users in China, it may reasonably be expected that Weibo, WeChat, and TikTok (and its Chinese version, Douyin) will be more investigated in future studies. Currently, these platforms present research gaps, and their usage and information dissemination warrant further evaluation. Future studies should also include social platforms targeting non-English users to provide a wider global perspective.
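To make the platform-level contrast above concrete, the following is a minimal sketch, in Python, of how citations per paper might be tallied by platform from an exported record set. The file name and column labels ("Title", "Abstract", "Times Cited") are hypothetical assumptions about an export format; the study itself used VOSviewer on Web of Science full records.

```python
# Minimal sketch: citations per paper grouped by platform, assuming a
# hypothetical CSV export with "Title", "Abstract", and "Times Cited" columns.
import pandas as pd

PLATFORMS = ["Twitter", "YouTube", "Facebook", "Instagram",
             "Weibo", "WhatsApp", "Reddit", "WeChat", "TikTok"]

def tag_platform(text: str) -> str:
    """Label a record with the first platform named in its title/abstract."""
    lowered = text.lower()
    for platform in PLATFORMS:
        if platform.lower() in lowered:
            return platform
    return "Other/General"

records = pd.read_csv("wos_export.csv")  # hypothetical export file
records["Platform"] = (records["Title"].fillna("") + " " +
                       records["Abstract"].fillna("")).map(tag_platform)

# Mirrors the reported contrast: >13.7 citations/paper for Twitter, YouTube,
# and Facebook vs <8.7 for the remaining platforms.
summary = (records.groupby("Platform")["Times Cited"]
                  .agg(papers="count", citations_per_paper="mean")
                  .sort_values("citations_per_paper", ascending=False))
print(summary)
```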
Project description: The powerful allure of social media platforms has been attributed to the human need for social rewards. Here, we demonstrate that the spread of misinformation on such platforms is facilitated by existing social 'carrots' (e.g., 'likes') and 'sticks' (e.g., 'dislikes') that are dissociated from the veracity of the information shared. Testing 951 participants over six experiments, we show that a slight change to the incentive structure of social media platforms, such that social rewards and punishments are contingent on information veracity, produces a considerable increase in the discernment of shared information, that is, in the proportion of true information shared relative to the proportion of false information shared. Computational modeling (drift-diffusion models) revealed that the underlying mechanism of this effect is an increase in the weight participants assign to evidence consistent with discerning behavior. The results offer evidence for an intervention that could be adopted to reduce misinformation spread, which in turn could reduce violence, vaccine hesitancy, and political polarization, without reducing engagement.
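As a rough illustration of the computational modeling mentioned above, the sketch below simulates a drift-diffusion decision process in Python; a larger drift rate stands in for the heavier weight participants assign to discernment-consistent evidence. All parameter values are illustrative, not the fitted values from the six experiments.

```python
# Minimal drift-diffusion sketch: evidence accumulates noisily toward one of
# two bounds (share true-consistent vs false-consistent content).
import numpy as np

rng = np.random.default_rng(0)

def simulate_ddm(drift, threshold=1.0, noise=1.0, dt=0.001, max_t=5.0):
    """Return +1 (discerning choice) or -1, plus the decision time."""
    x, t = 0.0, 0.0
    while abs(x) < threshold and t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (1 if x >= threshold else -1), t

# A higher evidence weight (drift) yields more discerning choices, the
# mechanism the modeling associates with veracity-contingent incentives.
for drift in (0.2, 0.8):  # illustrative values
    choices = np.array([simulate_ddm(drift)[0] for _ in range(2000)])
    print(f"drift={drift}: discerning choices = {(choices == 1).mean():.2f}")
```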
Project description: Background: During global health crises such as the COVID-19 pandemic, misinformation spreads rapidly on social media. The misinformation associated with COVID-19 has been analyzed, but little attention has been paid to developing a comprehensive analytical framework to study its spread on social media. Objective: We propose an elaboration likelihood model-based theoretical model to understand the persuasion process of COVID-19-related misinformation on social media. Methods: The proposed model incorporates the central-route feature (content feature) and peripheral features (creator authority, social proof, and emotion). The central-level COVID-19-related misinformation feature comprises five topics: medical information, social issues and people's livelihoods, government response, epidemic spread, and international issues. We first created a data set of COVID-19 pandemic-related misinformation based on fact-checking sources and a data set of posts containing this misinformation on real-world social media. Based on the collected posts, we analyzed the dissemination patterns. Results: Our data set included 11,450 misinformation posts, with medical misinformation as the largest category (n=5359, 46.80%). The results suggest that both the least (4660/11,301, 41.24%) and most (2320/11,301, 20.53%) active users are prone to sharing misinformation. Further, posts related to international topics, which have the greatest chance of producing a profound and lasting impact on social media, exhibited the highest distribution depth (maximum depth=14) and width (maximum width=2355). Additionally, 97.00% (2364/2437) of the spread was characterized by radiation dissemination. Conclusions: Our proposed model and findings could help combat the spread of misinformation by detecting suspicious users and identifying propagation characteristics.
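The depth and width statistics reported above can be computed with a simple breadth-first traversal of a repost cascade. The sketch below is a minimal Python illustration on a toy cascade; the edge representation is an assumption, not the study's actual data format.

```python
# Minimal sketch: maximum depth (longest repost chain) and maximum width
# (largest number of posts at any level) of a misinformation cascade.
from collections import defaultdict, deque

def cascade_stats(edges, root):
    """edges: iterable of (parent_post, child_post) repost pairs."""
    children = defaultdict(list)
    for parent, child in edges:
        children[parent].append(child)

    level_counts = defaultdict(int)  # posts per depth level
    queue = deque([(root, 0)])
    while queue:
        node, depth = queue.popleft()
        level_counts[depth] += 1
        for child in children[node]:
            queue.append((child, depth + 1))

    return max(level_counts), max(level_counts.values())  # (depth, width)

toy_edges = [("root", "a"), ("root", "b"), ("a", "c")]
print(cascade_stats(toy_edges, "root"))  # -> (2, 2)
```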
Project description: The spread of scientific misinformation is not new; it has long posed threats to human health, environmental well-being, and the creation of a sustainable and equitable future. With the COVID-19 pandemic, however, the need to develop strategies to counteract scientific misinformation has taken on acute urgency. Cell editor Nicole Neuman sat down with Walter Quattrociocchi and Dietram Scheufele to gain insights into how we got here and what does, and does not, work to fight the spread of scientific misinformation. Excerpts from this conversation, edited for clarity and length, are presented below; the full conversation is available with the article online.
Project description: Background: Fueled by misinformation, fentanyl panic has harmed public health by complicating overdose rescue while rationalizing hyper-punitive criminal laws, wasteful expenditures, and proposals to curtail vital access to pain pharmacotherapy. To assess misinformation about the health risk of casual contact with fentanyl, we characterize its diffusion and excess visibility in mainstream and social media. Methods: We used Media Cloud to compile and characterize mainstream and social media content published between January 2015 and September 2019 on overdose risk from casual fentanyl exposure. Results: Relevant content appeared in 551 news articles spanning 48 states. Misinformed media reports received approximately 450,000 Facebook shares, potentially reaching nearly 70,000,000 users between 2015 and 2019. Amplified by erroneous government statements, misinformation received excess social media visibility by a factor of 15 compared with corrective content, which garnered fewer than 30,000 shares with a potential reach of 4,600,000 Facebook users. Conclusion: Health-related misinformation continues to proliferate online, hampering responses to public health crises. More evidence-informed tools are needed to effectively challenge misinformed narratives in mainstream and social media.
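The factor-of-15 figure above is a simple ratio of share counts; a minimal sketch using only the numbers quoted in this abstract:

```python
# Excess visibility: shares of misinformed content relative to corrective
# content, using the figures reported above.
misinformed_shares = 450_000  # approximate Facebook shares, 2015-2019
corrective_shares = 30_000    # "fewer than 30,000" shares

print(f"Excess visibility factor: ~{misinformed_shares / corrective_shares:.0f}x")
```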
Project description: Objectives: In biomedical research, spin is the overinterpretation of findings, and it is a growing concern. To date, the presence of spin has not been evaluated in prognostic model research in oncology, including studies developing and validating models for individualized risk prediction. Study design and setting: We conducted a systematic review, searching MEDLINE and EMBASE for oncology-related studies that developed and validated a prognostic model using machine learning, published between January 1, 2019, and September 5, 2019. We used existing spin frameworks and described areas of highly suggestive spin practices. Results: We included 62 publications (152 developed models; 37 validated models). In 27% of studies, reporting was inconsistent between the methods and the results because of additional analyses and selective reporting. Thirty-two of 36 applicable studies reported comparisons between developed models in their discussion and predominantly used discrimination measures to support their claims (78%). Thirty-five studies (56%) used an overly strong or leading word in their title, abstract, results, discussion, or conclusion. Conclusion: The potential for spin needs to be considered when reading, interpreting, and using studies that developed and validated prognostic models in oncology. Researchers should report their prognostic model research carefully, using words that reflect their actual results and strength of evidence.
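For readers unfamiliar with the discrimination measures that most of these claims relied on, the sketch below computes the most common one, the c-statistic (equivalent to the area under the ROC curve), on toy data; the values are illustrative, not drawn from any reviewed model.

```python
# Minimal sketch: the c-statistic (ROC AUC), the discrimination measure most
# often used to support model-comparison claims in the reviewed studies.
from sklearn.metrics import roc_auc_score

observed_outcomes = [0, 0, 1, 1, 1, 0, 1, 0]                  # toy labels
predicted_risks = [0.1, 0.4, 0.35, 0.8, 0.7, 0.2, 0.9, 0.5]   # toy predictions

print(f"c-statistic: {roc_auc_score(observed_outcomes, predicted_risks):.2f}")
```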
Project description: Misinformation online poses a range of threats, from subverting democratic processes to undermining public health measures. Proposed solutions range from encouraging more selective sharing by individuals to removing false content and accounts that create or promote it. Here we provide a framework to evaluate interventions aimed at reducing viral misinformation online both in isolation and when used in combination. We begin by deriving a generative model of viral misinformation spread, inspired by research on infectious disease. By applying this model to a large corpus (10.5 million tweets) of misinformation events that occurred during the 2020 US election, we reveal that commonly proposed interventions are unlikely to be effective in isolation. However, our framework demonstrates that a combined approach can achieve a substantial reduction in the prevalence of misinformation. Our results highlight a practical path forward as misinformation online continues to threaten vaccination efforts, equity and democratic processes around the globe.
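Below is a minimal sketch of the kind of infectious-disease-inspired branching model described above, with interventions represented as multiplicative reductions in the reproduction number; all parameter values are illustrative, not the paper's fitted estimates from the 10.5 million tweets.

```python
# Branching-process sketch of viral misinformation spread: each active post
# spawns a Poisson-distributed number of reshares per generation.
import numpy as np

rng = np.random.default_rng(1)

def total_reshares(R, generations=10, seeds=100):
    """Total reshares across generations for mean reproduction number R."""
    total, active = 0, seeds
    for _ in range(generations):
        active = rng.poisson(R, size=active).sum() if active else 0
        total += active
    return total

R0 = 1.2  # illustrative unmitigated reproduction number
interventions = {
    "none": 1.0,
    "fact-check labels": 0.90,  # hypothetical 10% reduction in R
    "account removal": 0.85,    # hypothetical 15% reduction in R
    "combined": 0.90 * 0.85,    # interventions compound
}

for name, factor in interventions.items():
    mean_total = np.mean([total_reshares(R0 * factor) for _ in range(200)])
    print(f"{name:>17}: ~{mean_total:,.0f} reshares")
```

The paper's headline finding falls out of this arithmetic: single interventions nudge R down only modestly, but combining them can push an above-threshold process (R > 1) below threshold, collapsing cascade sizes.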
Project description: Understanding the mechanisms by which information and misinformation spread through groups of individual actors is essential to the prediction of phenomena ranging from coordinated group behaviors to misinformation epidemics. Transmission of information through groups depends on the rules that individuals use to transform the perceived actions of others into their own behaviors. Because it is often not possible to directly infer decision-making strategies in situ, most studies of behavioral spread assume that individuals make decisions by pooling or averaging the actions or behavioral states of neighbors. However, whether individuals may instead adopt more sophisticated strategies that exploit socially transmitted information, while remaining robust to misinformation, is unknown. Here, we study the relationship between individual decision-making and misinformation spread in groups of wild coral reef fish, where misinformation occurs in the form of false alarms that can spread contagiously through groups. Using automated visual field reconstruction of wild animals, we infer the precise sequences of socially transmitted visual stimuli perceived by individuals during decision-making. Our analysis reveals a feature of decision-making essential for controlling misinformation spread: dynamic adjustments in sensitivity to socially transmitted cues. This form of dynamic gain control can be achieved by a simple and biologically widespread decision-making circuit, and it renders individual behavior robust to natural fluctuations in misinformation exposure.
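The dynamic gain control identified above can be captured by a very small circuit: a running estimate of recent cue traffic divides the sensitivity applied to each incoming cue. The Python sketch below is a minimal illustration; all parameters are invented, not values inferred from the fish data.

```python
# Minimal sketch of dynamic gain control: sensitivity to social cues falls
# as recent cue traffic rises, damping runaway false-alarm cascades.
def respond(cues, tau=5.0, base_gain=1.0, threshold=1.0):
    """Return a startle decision for each social cue intensity in `cues`."""
    recent, decisions = 0.0, []
    for cue in cues:
        recent += (cue - recent) / tau     # leaky estimate of cue traffic
        gain = base_gain / (1.0 + recent)  # gain drops when cues are common
        decisions.append(gain * cue > threshold)
    return decisions

quiet = [0, 0, 2.0, 0, 0]  # an isolated cue in a quiet scene -> respond
noisy = [2.0] * 20         # sustained cues -> habituate after a few steps
print(sum(respond(quiet)), sum(respond(noisy)))  # -> 1 3
```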