ArgMed-Agents: Explainable Clinical Decision Reasoning with Large Language Models via Argumentation Schemes

Shengxin Hong, Liang Xiao, Xin Zhang, Jianxia Chen
Artificial Intelligence (cs.AI), Multiagent Systems (cs.MA), Symbolic Computation (cs.SC)
2024-03-10
There are two main barriers to using large language models (LLMs) in clinical reasoning. First, while LLMs show significant promise in natural language processing (NLP) tasks, their performance in complex reasoning and planning falls short of expectations. Second, LLMs reach clinical decisions through uninterpretable methods that are fundamentally different from clinicians' cognitive processes, which leads to user distrust. In this paper, we present a multi-agent framework called ArgMed-Agents, which enables LLM-based agents to perform explainable clinical decision reasoning through interaction. ArgMed-Agents carries out self-argumentation iterations via the Argumentation Scheme for Clinical Decision (a reasoning mechanism that models the cognitive processes of clinical reasoning), and then structures the argumentation process as a directed graph representing conflict relationships between arguments. Finally, a Reasoner (a symbolic solver) identifies a set of rational and coherent arguments to support the decision. ArgMed-Agents enables LLMs to mimic the process of clinical argumentative reasoning by generating explanations of their reasoning in a self-directed manner. Experiments show that ArgMed-Agents not only improves accuracy on complex clinical decision reasoning problems compared to other prompting methods, but, more importantly, provides users with decision explanations that increase their confidence.
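The Reasoner's role, selecting a coherent set of arguments from a directed graph of conflict (attack) relations, can be sketched with Dung-style abstract argumentation. The abstract does not specify which semantics the solver uses, so the grounded extension below (the least fixed point of the characteristic function, i.e. the set of arguments that are unattacked or defended against all their attackers) is an illustrative assumption, not the paper's actual algorithm:

```python
def grounded_extension(arguments, attacks):
    """Compute the grounded extension of an abstract argumentation
    framework, given a set of arguments and a set of (attacker, target)
    attack pairs. Illustrative sketch; not ArgMed-Agents' actual solver."""
    # Precompute the attackers of each argument.
    attackers = {a: {x for (x, y) in attacks if y == a} for a in arguments}
    ext = set()
    while True:
        # An argument is acceptable w.r.t. ext if every one of its
        # attackers is itself attacked by some argument in ext.
        # Unattacked arguments are vacuously acceptable.
        defended = {
            a for a in arguments
            if all(any((d, b) in attacks for d in ext) for b in attackers[a])
        }
        if defended == ext:  # fixed point reached
            return ext
        ext = defended

# Toy conflict graph: argument b attacks a, and c attacks b.
# c is unattacked, so c is in; c defeats b, which reinstates a.
print(sorted(grounded_extension({"a", "b", "c"}, {("b", "a"), ("c", "b")})))
```

In the ArgMed-Agents setting, the nodes would be clinical arguments generated during the self-argumentation iterations and the edges their conflict relations; the accepted set then supports one decision while the rejected attackers explain why alternatives were ruled out.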