Search results

1 – 10 of over 3000
Article
Publication date: 19 October 2022

Weirui Wang and Susan Jacobson

Abstract

Purpose

Health misinformation poses severe risks to people’s health decisions and outcomes. A great deal of research in this area has focused on debunking misinformation and has found limited effects of correctives after misinformation exposure. Research on prebunking strategies has been inadequate; most has focused on forewarning and on enhancing literacy skills and knowledge to recognize misinformation. Part of the reason for this inadequacy may be the challenges in conceptualizing and measuring knowledge. This study intends to fill this gap by examining various types of knowledge, including subjective knowledge, cancer literacy, persuasion knowledge and media literacy, and aims to understand how knowledge may moderate the effect of misinformation exposure on misbeliefs.

Design/methodology/approach

An online experiment with a single-factor design (misinformation exposure: health misinformation vs. factual health message) was conducted. The authors measured and tested the moderating role of each type of knowledge (subjective knowledge, cancer literacy, persuasion knowledge and media literacy) separately to improve understanding of their role in combating online health misinformation.
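
As an illustration only, a moderation test of this kind is typically estimated as an exposure × knowledge interaction in a regression model. The sketch below uses simulated data and hypothetical variable names (condition, cancer_literacy, misbelief); it is not the authors' materials or analysis.

```python
# Illustrative moderation test: does cancer literacy moderate the effect of
# misinformation exposure (vs. a factual message) on misbeliefs?
# All data and column names are simulated/hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "condition": rng.integers(0, 2, n),        # 0 = factual message, 1 = misinformation
    "cancer_literacy": rng.normal(0, 1, n),    # standardized knowledge score
})
# Simulate an outcome in which literacy dampens the exposure effect
df["misbelief"] = (0.6 * df["condition"]
                   - 0.4 * df["condition"] * df["cancer_literacy"]
                   + rng.normal(0, 1, n))

# The condition x literacy interaction term carries the moderation effect
model = smf.ols("misbelief ~ condition * cancer_literacy", data=df).fit()
print(model.summary().tables[1])
```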

Findings

This study found that a higher level of cancer literacy and persuasion knowledge helped people identify misinformation and prevented them from being persuaded by it. A higher level of subjective knowledge, however, reduced the recognition of misinformation, thereby increasing the likelihood of being persuaded by it. Media literacy did not moderate the mediation path.

Originality/value

This study differentiates the roles that different types of knowledge may play in moderating the influence of health misinformation. It contributes to the strategic development of interventions that better prepare people against the influence of health misinformation.

Details

Journal of Information, Communication and Ethics in Society, vol. 21 no. 1
Type: Research Article
ISSN: 1477-996X

Article
Publication date: 29 February 2024

Donghee Shin, Kulsawasd Jitkajornwanich, Joon Soo Lim and Anastasia Spyridou

Abstract

Purpose

This study examined how people assess health information from AI and improve their diagnostic ability to identify health misinformation. The proposed model was designed to test a cognitive heuristic theory in misinformation discernment.

Design/methodology/approach

We proposed the heuristic-systematic model to assess health misinformation processing in the algorithmic context. Using the Analysis of Moment Structures (AMOS) 26 software, we tested fairness/transparency/accountability (FAccT) as constructs that influence users’ heuristic evaluation and systematic discernment of misinformation. To test moderating and mediating effects, the PROCESS macro (Model 4) was used.
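
PROCESS Model 4 corresponds to a simple mediation model with a bootstrapped indirect effect. The following is a rough Python analogue under simulated data and assumed variable roles (FAccT perception → heuristic evaluation → perceived diagnosticity); it does not reproduce the authors' AMOS or PROCESS setup.

```python
# Rough analogue of a simple mediation (X -> M -> Y) with a bootstrapped
# indirect effect, similar in spirit to PROCESS Model 4.
# Data and variable roles are simulated/assumed, not the study's.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)                      # e.g. FAccT perception
m = 0.5 * x + rng.normal(size=n)            # e.g. heuristic evaluation
y = 0.4 * m + 0.2 * x + rng.normal(size=n)  # e.g. perceived diagnosticity

def indirect_effect(x, m, y):
    a = sm.OLS(m, sm.add_constant(x)).fit().params[1]                        # X -> M
    b = sm.OLS(y, sm.add_constant(np.column_stack([m, x]))).fit().params[1]  # M -> Y, controlling X
    return a * b

boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)              # resample respondents with replacement
    boot.append(indirect_effect(x[idx], m[idx], y[idx]))
print("indirect effect:", round(indirect_effect(x, m, y), 3),
      "95% bootstrap CI:", np.round(np.percentile(boot, [2.5, 97.5]), 3))
```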

Findings

The effect of AI-generated misinformation on people’s perceptions of the veracity of health information may differ according to whether they process misinformation heuristically or systematically. Heuristic processing is significantly associated with the diagnosticity of misinformation. There is a greater chance that misinformation will be correctly diagnosed and checked if it aligns with users’ heuristics or is validated by the diagnosticity they perceive.

Research limitations/implications

When users are exposed to misinformation through algorithmic recommendations, their perceived diagnosticity of misinformation can be predicted accurately from their understanding of normative values. This perceived diagnosticity would then positively influence their judgments of the accuracy and credibility of the misinformation.

Practical implications

Perceived diagnosticity plays a key role in fostering misinformation literacy, implying that improving people’s perceptions of misinformation and AI features is an efficient way to change their misinformation behavior.

Social implications

Although there is broad agreement on the need to control and combat health misinformation, the magnitude of this problem remains unknown. It is essential to understand both users’ cognitive processes in identifying health misinformation and the diffusion mechanisms through which such misinformation is framed and subsequently spread.

Originality/value

The mechanisms through which users process and spread misinformation have remained open-ended questions. This study provides theoretical insights and relevant recommendations that can make users and firms/institutions alike more resilient in protecting themselves from the detrimental impact of misinformation.

Peer review

The peer review history for this article is available at: https://publons.com/publon/10.1108/OIR-04-2023-0167

Details

Online Information Review, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1468-4527

Article
Publication date: 2 December 2020

Yang Cheng and Yunjuan Luo

Abstract

Purpose

Informed by the third-person effects (TPE) theory, this study aims to analyze restrictive versus corrective actions in response to the perceived TPE of misinformation on social media in the USA.

Design/methodology/approach

The authors conducted an online survey among 1,793 adults in the USA in early April. All participants were randomly enrolled in this research through a professional survey company. Structural equation modeling via Amos 20 was used for hypothesis testing.
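
Amos is commercial software; as an open-source illustration, a path model of the kind described could be specified in the Python package semopy. The variable names below (tpp_gap, neg_affect, restrict, correct) and the data are hypothetical placeholders, not the authors' model.

```python
# Open-source illustration of a path model like the one described (the study
# itself used Amos 20). Variables and data below are simulated placeholders.
import numpy as np
import pandas as pd
from semopy import Model  # pip install semopy

rng = np.random.default_rng(2)
n = 600
tpp_gap = rng.normal(size=n)                          # presumed influence on others minus self
neg_affect = 0.3 * tpp_gap + rng.normal(size=n)       # negative affect toward misinformation
restrict = 0.4 * tpp_gap + 0.3 * neg_affect + rng.normal(size=n)
correct = 0.3 * tpp_gap + 0.4 * restrict + rng.normal(size=n)
data = pd.DataFrame({"tpp_gap": tpp_gap, "neg_affect": neg_affect,
                     "restrict": restrict, "correct": correct})

# Path specification in lavaan-like syntax
desc = """
restrict ~ tpp_gap + neg_affect
correct  ~ tpp_gap + restrict
"""
model = Model(desc)
model.fit(data)
print(model.inspect())  # path estimates, standard errors, p-values
```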

Findings

Results indicated that individuals perceived others to be more influenced by misinformation about COVID-19 than they were themselves. Further, this perceptual gap was associated with public support for both governmental restrictions and corrective action. Negative affect toward health misinformation directly influenced public support for governmental restrictions rather than corrective action. Support for governmental restrictions could further facilitate corrective action.

Originality/value

This study examined the applicability of TPE theory in the context of digital health misinformation during a unique global crisis. It explored the significant role of negative affect in influencing restrictive and corrective actions. Practically, this study offers implications for information and communication educators and practitioners.

Peer review

The peer review history for this article is available at: https://publons.com/publon/10.1108/OIR-08-2020-0386

Article
Publication date: 14 January 2022

Liang Chen and Lunrui Fu

Abstract

Purpose

Drawing on the third-person effect (TPE) theory and the theory of planned behavior (TPB) as a theoretical framework, the current study aims to explore the cognitive mechanisms behind how third-person perception (TPP) of misinformation about public health emergencies affects intention to engage in corrective actions via attitude, subjective norms and perceived behavioral control.

Design/methodology/approach

A total of 1,063 participants were recruited via a professional survey company (Sojump) to complete an online national survey in China during the outbreak of coronavirus (COVID-19). Structural equation modeling in Mplus 7.0 was used to test the research hypotheses.
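
The study used Mplus 7.0; purely as an illustration, the reported TPB paths could be restated with the same open-source semopy approach sketched for the previous entry. The specification and data below are assumptions, not the authors' model files.

```python
# Hypothetical restatement of the described TPB paths (attention -> TPP ->
# attitude / subjective norms / perceived behavioral control -> intention).
# The study itself used Mplus 7.0; data and names here are simulated.
import numpy as np
import pandas as pd
from semopy import Model

rng = np.random.default_rng(3)
n = 800
attention = rng.normal(size=n)
tpp = 0.4 * attention + rng.normal(size=n)
attitude = 0.3 * tpp + rng.normal(size=n)
pbc = 0.3 * tpp + rng.normal(size=n)
sn = rng.normal(size=n)  # subjective norms (no significant TPP path was reported)
intention = 0.3 * attitude + 0.1 * sn + 0.3 * pbc + rng.normal(size=n)
data = pd.DataFrame(dict(attention=attention, tpp=tpp, attitude=attitude,
                         sn=sn, pbc=pbc, intention=intention))

desc = """
tpp       ~ attention
attitude  ~ tpp
sn        ~ tpp
pbc       ~ tpp
intention ~ attitude + sn + pbc
"""
model = Model(desc)
model.fit(data)
print(model.inspect())
```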

Findings

The results reveal that attention to online information about public health emergencies significantly predicted TPP. In addition, TPP positively influenced attitude and perceived behavioral control, which, in turn, positively encouraged individuals to take corrective actions to debunk online misinformation. However, TPP did not significantly influence subjective norms. A potential explanation is provided in the discussion section.

Research limitations/implications

The research extends the TPE theory by providing empirical evidence for corrective actions and uncovers the underlying cognitive mechanism behind the TPE by exploring key variables of the TPB as mediating constructs. These are all significant theoretical contributions to the TPE and offer practical contributions to combating online misinformation.

Originality/value

The research extends the TPE theory by providing empirical evidence for a novel behavioral outcome (i.e. corrective actions in response to misinformation) and uncovers the cognitive mechanism underlying the TPE by exploring key variables of the TPB as mediating constructs. These are all significant theoretical contributions to the TPE and offer practical contributions to combating online misinformation.

Details

Internet Research, vol. 32 no. 4
Type: Research Article
ISSN: 1066-2243

Article
Publication date: 26 July 2023

Yulong Tang, Chen Luo and Yan Su

Abstract

Purpose

The ballooning health misinformation on social media raises grave concerns. Drawing upon the S-O-R (Stimulus-Organism-Response) model and the information processing literature, this study aims to explore (1) how social media health information seeking (S) affects health misinformation sharing intention (R) through the channel of health misperceptions (O) and (2) whether the mediation process would be contingent upon different information processing predispositions.

Design/methodology/approach

Data were collected from a survey of 388 middle-aged and older Chinese respondents, one of the populations in China most susceptible to health misinformation. Standard multiple linear regression models and the PROCESS macro were used to examine the direct effect and the moderated mediation model.
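
A moderated mediation of this kind is commonly probed by estimating the conditional indirect effect at different levels of the moderator. The sketch below uses simulated data and assumed variable names (seeking, fi, misperception, share_intent) and is not the authors' PROCESS specification; the interaction is simulated so the seeking–misperception path weakens at higher FI, mirroring the reported pattern.

```python
# Illustrative moderated mediation: the seeking -> misperception path varies
# with faith in intuition (FI). Simulated data, assumed variable names.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 400
df = pd.DataFrame({"seeking": rng.normal(size=n), "fi": rng.normal(size=n)})
# Simulated so the seeking path is stronger at low FI, as the findings report
df["misperception"] = (0.3 * df["seeking"] + 0.2 * df["fi"]
                       - 0.25 * df["seeking"] * df["fi"] + rng.normal(size=n))
df["share_intent"] = 0.5 * df["misperception"] + rng.normal(size=n)

m_model = smf.ols("misperception ~ seeking * fi", data=df).fit()   # first-stage model
y_model = smf.ols("share_intent ~ misperception + seeking", data=df).fit()

b = y_model.params["misperception"]
for w in (-1, 1):  # conditional indirect effect at FI = -1 SD and +1 SD
    a_w = m_model.params["seeking"] + m_model.params["seeking:fi"] * w
    print(f"FI = {w:+d} SD -> conditional indirect effect = {a_w * b:.3f}")
```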

Findings

Results bolstered the S-O-R-based mechanism, in which health misperceptions mediated the effect of social media health information seeking on health misinformation sharing intention. Need for cognition (NFC), an indicator of analytical information processing, failed to moderate the mediation process. In contrast, faith in intuition (FI), an indicator of intuitive information processing, served as a significant moderator: the positive association between social media health information seeking and misperceptions was stronger among respondents with low FI.

Originality/value

This study sheds light on health misinformation sharing research by bridging health information seeking, information internalization and information sharing. Moreover, the authors extended the S-O-R model by integrating information processing predispositions, which distinguishes this study from previous literature and advances the extant understanding of how information processing styles operate in the face of online health misinformation. The particular age group and the Chinese context further yield context-specific implications for online health misinformation regulation.

Peer review

The peer review history for this article is available at: https://publons.com/publon/10.1108/OIR-04-2023-0157.

Details

Online Information Review, vol. 48 no. 2
Type: Research Article
ISSN: 1468-4527

Article
Publication date: 2 May 2023

Chen Luo, Yijia Zhu and Anfan Chen

Abstract

Purpose

Drawing upon the third-person effect (TPE) theory, this study focuses on two types of misinformation countering intentions (i.e. simple correction and correction with justification). Accordingly, it aims to (1) assess the tenability of the third-person perception (TPP) in the face of misinformation on social media, (2) explore the antecedents of TPP and its relationship with individual-level misinformation countering intentions and (3) examine whether the mediating process is contingent on different social media usage conditions.

Design/methodology/approach

An online survey was conducted with 1,000 representative respondents recruited in Mainland China in January 2022 using quota sampling. A paired t-test, multiple linear regression and moderated mediation analysis were employed to examine the proposed hypotheses.
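
The paired t-test referred to is the standard test for TPP: each respondent's rating of misinformation's influence on others is compared with their rating of its influence on themselves. A minimal illustration with simulated ratings follows; none of the numbers are the study's.

```python
# Paired t-test for third-person perception: each respondent rates the
# influence of misinformation on others vs. on themselves. Simulated ratings.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n = 1000
influence_self = rng.normal(3.0, 1.0, n)                       # e.g. 1-7 scale
influence_others = influence_self + rng.normal(0.8, 1.0, n)    # others rated as more influenced

t, p = stats.ttest_rel(influence_others, influence_self)
gap = np.mean(influence_others - influence_self)
print(f"mean self-other gap = {gap:.2f}, t = {t:.2f}, p = {p:.3g}")
```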

Findings

Results bolster the fundamental proposition of TPP that individuals perceive others as more susceptible to social media misinformation than themselves. The self-other perceptual bias served as a mediator between the perceived consequences of misinformation and misinformation countering intentions (i.e. simple correction and correction with justification). Furthermore, the indirect mechanism was more likely to motivate intensive social media users to counter social media misinformation.

Originality/value

The findings provide further evidence for the role of the TPE in explaining misinformation countering intention as prosocial and altruistic rather than self-serving behavior. Practically, promising ways to combat rampant misinformation on social media include promoting to others the prosocial aspects and beneficial outcomes of misinformation countering efforts, as well as reconfiguring strategies to impel intensive social media users to take countering actions.

Peer review

The peer review history for this article is available at: https://publons.com/publon/10.1108/OIR-09-2022-0507.

Details

Online Information Review, vol. 48 no. 1
Type: Research Article
ISSN: 1468-4527

Article
Publication date: 15 August 2023

Myojung Chung

Abstract

Purpose

While there has been a growing call for insights on algorithms given their impact on what people encounter on social media, it remains unknown how enhanced algorithmic knowledge serves as a countermeasure to problematic information flow. To fill this gap, this study aims to investigate how algorithmic knowledge predicts people's attitudes and behaviors regarding misinformation through the lens of the third-person effect.

Design/methodology/approach

Four national surveys in the USA (N = 1,415), the UK (N = 1,435), South Korea (N = 1,798) and Mexico (N = 784) were conducted between April and September 2021. The survey questionnaire measured algorithmic knowledge, perceived influence of misinformation on self and others, intention to take corrective actions, support for government regulation and content moderation. Collected data were analyzed using multigroup SEM.
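
Multigroup SEM with invariance constraints is usually run in dedicated SEM software; as a crude, purely illustrative approximation, the same path model can be fitted separately per country and the path estimates compared. All variable names, group labels and data below are placeholders.

```python
# Crude per-country comparison (not a constrained multigroup SEM): fit the
# same path model in each country and compare estimates. Simulated data;
# variable names and group labels are placeholders.
import numpy as np
import pandas as pd
from semopy import Model

rng = np.random.default_rng(6)
frames = []
for country in ["US", "UK", "KR", "MX"]:
    n = 300
    algo_know = rng.normal(size=n)                        # algorithmic knowledge
    infl_self = 0.2 * algo_know + rng.normal(size=n)      # presumed influence on self
    infl_others = 0.3 * algo_know + rng.normal(size=n)    # presumed influence on others
    correct = 0.4 * infl_self + rng.normal(size=n)        # corrective intention
    regulate = 0.4 * infl_others + rng.normal(size=n)     # support for regulation/moderation
    frames.append(pd.DataFrame(dict(country=country, algo_know=algo_know,
                                    infl_self=infl_self, infl_others=infl_others,
                                    correct=correct, regulate=regulate)))
data = pd.concat(frames)

desc = """
infl_self   ~ algo_know
infl_others ~ algo_know
correct     ~ infl_self
regulate    ~ infl_others
"""
for country, group in data.groupby("country"):
    model = Model(desc)
    model.fit(group.drop(columns="country"))
    print(country)
    print(model.inspect())  # compare path estimates across countries
```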

Findings

Results indicate that algorithmic knowledge was associated with presumed influence of misinformation on self and others to different degrees. Presumed media influence on self was a strong predictor of intention to take actions to correct misinformation, while presumed media influence on others was a strong predictor of support for government-led platform regulation and platform-led content moderation. There were nuanced but noteworthy differences in the link between presumed media influence and behavioral responses across the four countries studied.

Originality/value

These findings are relevant for grasping the role of algorithmic knowledge in countering rampant misinformation on social media, as well as for expanding the US-centered extant literature by elucidating distinctive views of social media algorithms and misinformation in four countries.

Details

Internet Research, vol. 33 no. 5
Type: Research Article
ISSN: 1066-2243

Article
Publication date: 12 July 2013

Lauren A. Monds, Helen M. Paterson and Keenan Whittle

Abstract

Purpose

Operational debriefing and psychological debriefing both involve groups of participants (typically from the emergency services) discussing a critical incident. Research on post‐incident debriefing has previously raised concerns over the likelihood that this discussion may affect not only psychological responses, but also memory integrity. It is possible that discussion in this setting could increase susceptibility to the misinformation effect. This paper seeks to address these issues.

Design/methodology/approach

The aim of this study was to investigate whether adding a warning about the possibility of memory contamination to the debriefing instructions could reduce the misinformation effect. Participants viewed a stressful film and were assigned to one of three conditions: debriefing with standard instructions, debriefing with a memory warning, or an individual recall control condition. Free recall memory for the film and distress were assessed.
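
A three-condition design like this one is commonly analyzed with a between-groups comparison such as a one-way ANOVA on the number of misinformation items reported. The sketch below uses simulated scores and does not claim to reproduce the authors' analysis.

```python
# One-way ANOVA comparing misinformation items reported across the three
# conditions described. Scores are simulated for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
standard_debrief = rng.poisson(4.0, 40)     # debriefing, standard instructions
warned_debrief = rng.poisson(3.8, 40)       # debriefing with memory warning
individual_control = rng.poisson(2.0, 40)   # individual recall control

f, p = stats.f_oneway(standard_debrief, warned_debrief, individual_control)
print(f"F = {f:.2f}, p = {p:.3g}")
```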

Findings

Results indicate that participants in both debriefing conditions reported significantly more misinformation than those who did not participate in a discussion. Additionally, the warning about memory contamination did not diminish the misinformation effect.

Originality/value

These findings are discussed with suggestions for the future of debriefing, with a particular focus on the emergency services.

Details

International Journal of Emergency Services, vol. 2 no. 1
Type: Research Article
ISSN: 2047-0894

Article
Publication date: 5 December 2023

Porismita Borah and Kyle John Lorenzano

Abstract

Purpose

The main purpose of the study is to understand the factors that facilitate correction behavior among individuals. In this study, the authors examine the impact of self-perceived media literacy (SPML) and reflection on participants’ correction behavior.

Design/methodology/approach

Data for the study were collected from Amazon's MTurk using an online survey. Data were collected after a certificate of exemption was obtained from the Institutional Review Board of a research university in the United States (US). Qualtrics software was used to collect the data. The total number of participants was 797.

Findings

The findings show that although both SPML and reflection are positively associated with rumor refutation, higher SPML alone is not enough. Reflective judgment is critical for individuals to take part in this behavior online, such that individuals with higher reflective judgment indicated that they refute rumors online, irrespective of their SPML score.

Originality/value

The authors tested the relationships of multiple variables with participants’ correction behavior. Although research shows the importance of social correction, there is not much knowledge about what facilitates actual misinformation correction.

Details

Online Information Review, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1468-4527

Article
Publication date: 2 November 2022

Porismita Borah, Sojung Kim and Ying-Chia (Louise) Hsu

Abstract

Purpose

One of the most prolific areas of misinformation research is the examination of corrective strategies in messaging. The main purposes of the current study are to examine the effects of (1) partisan media, (2) credibility perceptions and emotional reactions and (3) theory-driven corrective messages on people's misperceptions about COVID-19 mask-wearing behaviors.

Design/methodology/approach

The authors used a randomized experimental design to test the hypotheses. The data were collected via the survey firm Lucid; the number of participants was 485. The study was conducted using Qualtrics after the research project was granted an exemption by the Institutional Review Board of a large university in the US. The authors conducted an online experiment with four conditions, crossing narrative versus statistics with individual versus collective framing. The manipulation messages were constructed as screenshots from Facebook.
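
A four-condition (2 × 2) experiment of this kind is commonly analyzed with a factorial ANOVA on the misperception outcome. The sketch below uses simulated data and assumed factor names and is illustrative only, not the authors' analysis.

```python
# Two-way ANOVA for a 2 x 2 design (narrative vs. statistics, individual vs.
# collective frame). Data, factor names and effect sizes are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(8)
rows = []
for style in ["narrative", "statistics"]:
    for frame in ["individual", "collective"]:
        # Simulated so the narrative + collective cell has the lowest misperceptions
        base = 3.0 - 0.3 * (style == "narrative") - 0.3 * (frame == "collective")
        for score in rng.normal(base, 1.0, 120):
            rows.append({"style": style, "frame": frame, "misperception": score})
df = pd.DataFrame(rows)

model = smf.ols("misperception ~ C(style) * C(frame)", data=df).fit()
print(anova_lm(model, typ=2))  # main effects and interaction
```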

Findings

The findings of this study show that higher exposure to liberal media was associated with lower misperceptions, whereas higher credibility perceptions of and positive reactions toward the misinformation post and negative emotions toward the correction comment were associated with higher misperceptions. Moreover, the findings showed that participants in the narrative and collective-frame condition had the lowest misperceptions.

Originality/value

The authors tested theory-driven misinformation corrective messages to understand the impact of these messages and multiple related variables on misperceptions about COVID-19 mask wearing. This study contributes to the existing misinformation correction literature by investigating the explanatory power of two well-established media effects theories for misinformation correction messaging and by identifying essential individual characteristics that should be considered when evaluating how misperceptions about the COVID-19 crisis work and get reduced.

Peer review

The peer review history for this article is available at: https://publons.com/publon/10.1108/OIR-11-2021-0600

Details

Online Information Review, vol. 47 no. 5
Type: Research Article
ISSN: 1468-4527
