Scientific evidence and specific context: leveraging large language models for health fact-checking
ISSN: 1468-4527
Article publication date: 26 September 2024
Issue publication date: 26 November 2024
Abstract
Purpose
This study aims to evaluate the performance of large language models (LLMs) with various prompt engineering strategies in the context of health fact-checking.
Design/methodology/approach
Inspired by Dual Process Theory, we introduce two kinds of prompts, Conclusion-first (System 1) and Explanation-first (System 2), together with their respective retrieval-augmented variants. We evaluate these prompts in terms of accuracy, argument elements, common errors and cost-effectiveness. Our study, conducted on two public health fact-checking datasets, categorized 10,212 claims into knowledge, anecdote and news claims. To further analyze the reasoning process of the LLMs, we examined the argument elements of the health fact-checking outputs generated by the different prompts, revealing their tendencies in using evidence and contextual qualifiers. We also conducted a content analysis to identify and compare the common errors made under the various prompts.
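The abstract describes the two prompt styles and their retrieval-augmented variants only at a high level. As an illustration, the Python sketch below shows one plausible way to structure them; the template wording, the retrieve_evidence helper and the example claim are hypothetical, not the authors' actual prompts.

```python
# Illustrative sketch of the two prompt styles described in the abstract.
# Templates, the retriever and the claim are placeholders, not the paper's
# actual prompts or pipeline.

CONCLUSION_FIRST = (  # System 1: verdict first, then justification
    "Claim: {claim}\n"
    "First state whether the claim is True, False or Unknown, "
    "then briefly justify your verdict."
)

EXPLANATION_FIRST = (  # System 2: reasoning first, then verdict
    "Claim: {claim}\n"
    "First reason step by step about the evidence and context of the claim, "
    "then conclude with a verdict: True, False or Unknown."
)

def retrieve_evidence(claim: str) -> list[str]:
    """Placeholder retriever; a real system would query a search index."""
    return ["[retrieved passage 1]", "[retrieved passage 2]"]

def build_prompt(claim: str, template: str, augment: bool = False) -> str:
    """Optionally prepend retrieved passages (the retrieval-augmented variant)."""
    prompt = template.format(claim=claim)
    if augment:
        evidence = "\n".join(retrieve_evidence(claim))
        prompt = f"Evidence:\n{evidence}\n\n{prompt}"
    return prompt

if __name__ == "__main__":
    claim = "Drinking hot water cures the flu."
    print(build_prompt(claim, EXPLANATION_FIRST, augment=True))
```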
Findings
Results indicate that the Conclusion-first prompt performs well on knowledge (89.70%, 66.09%), anecdote (79.49%, 79.99%) and news (85.61%, 85.95%) claims even without retrieval augmentation, proving to be cost-effective. In contrast, the Explanation-first prompt often classifies claims as unknown. With retrieval augmentation, however, its accuracy improves significantly on news claims (87.53%, 88.60%) and anecdote claims (87.28%, 90.62%). The Explanation-first prompt focuses more on context specificity and understanding user intent during health fact-checking, showing high potential when combined with retrieval augmentation. Additionally, retrieval-augmented LLMs concentrate more on evidence and context, highlighting the importance of the relevance and safety of retrieved content.
Originality/value
This study offers insights into how a balanced integration of fast (System 1) and deliberative (System 2) prompting strategies could enhance the overall performance of LLMs in critical applications, paving the way for future research on optimizing LLMs for complex cognitive tasks.
Peer review
The peer review history for this article is available at: https://publons.com/publon/10.1108/OIR-02-2024-0111
Acknowledgements
Thanks to everyone in LIMICS for their kind support. Funding was provided by the 73rd General Program of China Postdoctoral Science Foundation (Grant No. 2023M734062), the 2023 Youth Foundation for Humanities and Social Sciences of the Ministry of Education of the People’s Republic of China (Grant No. 23YJC870013) and the National Natural Science Foundation of China (Grant No. 72404288).
Citation
Ni, Z., Qian, Y., Chen, S., Jaulent, M.-C. and Bousquet, C. (2024), "Scientific evidence and specific context: leveraging large language models for health fact-checking", Online Information Review, Vol. 48 No. 7, pp. 1488-1514. https://doi.org/10.1108/OIR-02-2024-0111
Publisher
Emerald Publishing Limited
Copyright © 2024, Emerald Publishing Limited