Debiasing misinformation: how do people diagnose health recommendations from AI?

Donghee Shin (College of Media and Communication, Texas Tech University, Lubbock, Texas, USA)
Kulsawasd Jitkajornwanich (Department of Professional Communication, College of Media and Communication, Texas Tech University, Lubbock, Texas, USA)
Joon Soo Lim (Newhouse School of Public Communications, Syracuse University, Syracuse, New York, USA)
Anastasia Spyridou (College of Business, Zayed University, Abu Dhabi, United Arab Emirates)

Online Information Review

ISSN: 1468-4527

Article publication date: 29 February 2024

Issue publication date: 8 August 2024

Abstract

Purpose

This study examined how people assess health information from AI and how they improve their diagnostic ability to identify health misinformation. The proposed model was designed to test a cognitive heuristic theory of misinformation discernment.

Design/methodology/approach

We proposed the heuristic-systematic model to assess how users process health misinformation in algorithmic contexts. Using Analysis of Moment Structures (AMOS) 26 software, we tested fairness, accountability and transparency (FAccT) as constructs that influence users' heuristic evaluation and systematic discernment of misinformation. PROCESS Macro Model 4 was used to test moderating and mediating effects.
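For readers unfamiliar with PROCESS Model 4, the sketch below illustrates the kind of simple mediation analysis it performs, here on simulated data in Python. The variable names (faact, diagnosticity, credibility) are illustrative placeholders, not the study's actual measures or analysis code.

```python
# Minimal sketch of a PROCESS Model 4-style mediation analysis
# using ordinary least squares from statsmodels. All data here is
# simulated; effect sizes are arbitrary illustration values.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500

# Simulated survey-style scores (hypothetical, for illustration only).
faact = rng.normal(size=n)                                   # X: perceived FAccT
diagnosticity = 0.5 * faact + rng.normal(size=n)             # M: perceived diagnosticity
credibility = 0.4 * diagnosticity + 0.2 * faact + rng.normal(size=n)  # Y: credibility judgment

df = pd.DataFrame({"faact": faact,
                   "diagnosticity": diagnosticity,
                   "credibility": credibility})

# Path a: X -> M
a = smf.ols("diagnosticity ~ faact", df).fit().params["faact"]

# Path b (M -> Y, controlling for X) and direct effect c'
model_y = smf.ols("credibility ~ diagnosticity + faact", df).fit()
b = model_y.params["diagnosticity"]
c_prime = model_y.params["faact"]

print(f"indirect effect (a*b): {a * b:.3f}")
print(f"direct effect (c'):    {c_prime:.3f}")
```

In practice, PROCESS also reports bootstrap confidence intervals for the indirect effect, which would be the natural next step beyond this sketch.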

Findings

The effect of AI-generated misinformation on people's perceptions of the veracity of health information may differ according to whether they process misinformation heuristically or systematically. Heuristic processing is significantly associated with the diagnosticity of misinformation. Misinformation is more likely to be correctly diagnosed and checked if it aligns with users' heuristics or is validated by the diagnosticity they perceive.

Research limitations/implications

When users are exposed to misinformation through algorithmic recommendations, their perceived diagnosticity of that misinformation can be predicted accurately from their understanding of normative values. This perceived diagnosticity, in turn, positively influences their judgments of the misinformation's accuracy and credibility.

Practical implications

Perceived diagnosticity plays a key role in fostering misinformation literacy, implying that improving people's perceptions of misinformation and AI features is an effective way to change their misinformation behavior.

Social implications

Although there is broad agreement on the need to control and combat health misinformation, the magnitude of the problem remains unknown. It is essential to understand both the cognitive processes users employ to identify health misinformation and the diffusion mechanisms through which such misinformation is framed and subsequently spread.

Originality/value

The mechanisms through which users process and spread misinformation remain open questions. This study provides theoretical insights and practical recommendations that can help users and firms/institutions alike become more resilient to the detrimental impact of misinformation.

Peer review

The peer review history for this article is available at: https://publons.com/publon/10.1108/OIR-04-2023-0167

Citation

Shin, D., Jitkajornwanich, K., Lim, J.S. and Spyridou, A. (2024), "Debiasing misinformation: how do people diagnose health recommendations from AI?", Online Information Review, Vol. 48 No. 5, pp. 1025-1044. https://doi.org/10.1108/OIR-04-2023-0167

Publisher

Emerald Publishing Limited

Copyright © 2024, Emerald Publishing Limited
