Zhenni Ni, Yuxing Qian, Shuaipu Chen, Marie-Christine Jaulent and Cedric Bousquet
Abstract
Purpose
This study aims to evaluate the performance of LLMs with various prompt engineering strategies in the context of health fact-checking.
Design/methodology/approach
Inspired by Dual Process Theory, we introduce two kinds of prompts, Conclusion-first (System 1) and Explanation-first (System 2), along with their retrieval-augmented variants, and evaluate them in terms of accuracy, argument elements, common errors and cost-effectiveness. The study was conducted on two public health fact-checking datasets comprising 10,212 claims, categorized as knowledge, anecdote and news claims. To further analyze the reasoning process of LLMs, we examine the argument elements of the health fact-checks generated by different prompts, revealing their tendencies in using evidence and contextual qualifiers. We also conducted a content analysis to identify and compare common errors across prompts.
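To make the two prompt styles concrete, the sketch below shows one way the Conclusion-first and Explanation-first prompts and their retrieval-augmented variants might be templated. The wording of the templates and the build_prompt helper are illustrative assumptions; the abstract does not give the authors' exact prompts.

```python
# Illustrative sketch only: these templates are assumptions, not the
# authors' actual prompts, which are not reproduced in this abstract.

CONCLUSION_FIRST = (  # System 1: verdict first, then a brief justification
    "Claim: {claim}\n"
    "First state whether the claim is true, false, or unknown, "
    "then briefly explain your verdict."
)

EXPLANATION_FIRST = (  # System 2: step-by-step reasoning first, then verdict
    "Claim: {claim}\n"
    "First reason step by step about the evidence for and against the claim, "
    "then state whether it is true, false, or unknown."
)

def build_prompt(template: str, claim: str, evidence: str | None = None) -> str:
    """Fill a prompt template; optionally prepend retrieved evidence,
    yielding the retrieval-augmented variant of either prompt style."""
    prompt = template.format(claim=claim)
    if evidence is not None:
        prompt = f"Evidence: {evidence}\n\n{prompt}"
    return prompt

# Example: retrieval-augmented Explanation-first prompt for one claim.
print(build_prompt(EXPLANATION_FIRST,
                   claim="Vitamin C cures the common cold.",
                   evidence="Meta-analyses find no curative effect."))
```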
Findings
Results indicate that the Conclusion-first prompt performs well on knowledge (89.70%, 66.09%), anecdote (79.49%, 79.99%) and news (85.61%, 85.95%) claims across the two datasets, even without retrieval augmentation, proving to be cost-effective. In contrast, the Explanation-first prompt often classifies claims as unknown, but with retrieval augmentation it significantly boosts accuracy on news (87.53%, 88.60%) and anecdote (87.28%, 90.62%) claims. The Explanation-first prompt focuses more on context specificity and understanding user intent during health fact-checking, showing high potential when combined with retrieval augmentation. Additionally, retrieval-augmented LLMs concentrate more on evidence and context, highlighting the importance of the relevance and safety of retrieved content.
Originality/value
This study offers insights into how a balanced integration of System 1 and System 2 prompting could enhance the overall performance of LLMs in critical applications, paving the way for future research on optimizing LLMs for complex cognitive tasks.
Peer review
The peer review history for this article is available at: https://publons.com/publon/10.1108/OIR-02-2024-0111