
Towards Safe and Aligned Large Language Models for Medicine

Authors:
Tessa Han, Aounon Kumar, Chirag Agarwal, Himabindu Lakkaraju
Keywords:
Computer Science, Artificial Intelligence, Artificial Intelligence (cs.AI)
Journal:
--
Date:
2024-03-06
Abstract
The capabilities of large language models (LLMs) have been progressing at a breathtaking speed, leaving even their own developers grappling with the depth of their potential and risks. While initial steps have been taken to evaluate the safety and alignment of general-knowledge LLMs, exposing some weaknesses, to our knowledge, the safety and alignment of medical LLMs have not been evaluated despite the risks they pose to personal health and safety, public health and safety, and human rights. To this end, we carry out the first safety evaluation for medical LLMs. Specifically, we set forth a definition of medical safety and alignment for medical artificial intelligence systems, develop a dataset of harmful medical questions to evaluate the medical safety and alignment of an LLM, evaluate both the general and medical safety and alignment of medical LLMs, demonstrate fine-tuning as an effective mitigation strategy, and discuss broader, large-scale approaches used by the machine learning community to develop safe and aligned LLMs. We hope that this work casts light on the safety and alignment of medical LLMs and motivates future work to study these issues and develop additional mitigation strategies, minimizing the risk of harm from LLMs in medicine.
PDF: Towards Safe and Aligned Large Language Models for Medicine.pdf