ArxivPaperAI

A Statistical Framework for Measuring AI Reliance

Authors:
Ziyang Guo, Yifan Wu, Jason Hartline, Jessica Hullman
Keywords:
Computer Science, Artificial Intelligence (cs.AI), Human-Computer Interaction (cs.HC)
Journal:
--
Date:
2024-01-27
Abstract
Humans frequently make decisions with the aid of artificially intelligent (AI) systems. A common pattern is for the AI to recommend an action to the human, who retains control over the final decision. Researchers have identified ensuring that a human has appropriate reliance on an AI as a critical component of achieving complementary performance. We argue that the current definition of appropriate reliance used in such research lacks formal statistical grounding and can lead to contradictions. We propose a formal definition of reliance, based on statistical decision theory, which separates reliance, defined as the probability that the decision-maker follows the AI's prediction, from the challenges a human may face in differentiating the signals and forming accurate beliefs about the situation. Our definition gives rise to a framework that can be used to guide the design and interpretation of studies on human-AI complementarity and reliance. Using recent AI-advised decision-making studies from the literature, we demonstrate how our framework can be used to separate the loss due to mis-reliance from the loss due to not accurately differentiating the signals. We evaluate these losses by comparing them to a baseline and to a benchmark for complementary performance, defined by the expected payoff achieved by a rational agent facing the same decision task as the behavioral agents.
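The decomposition the abstract describes can be illustrated with a minimal simulation. The sketch below is not the paper's actual method; it assumes a simple binary decision task with fixed, hypothetical accuracies for the AI (80%) and the unaided human (65%), and a fixed probability that the human follows the AI (70%). It estimates reliance as the empirical rate of following the AI and, under these assumptions (where the AI is uniformly more accurate, so a rational agent would always follow it), reports the payoff lost to mis-reliance.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical simulated study data (illustrative assumptions, not from the paper):
truth = rng.integers(0, 2, n)                                    # ground-truth labels
ai_pred = np.where(rng.random(n) < 0.80, truth, 1 - truth)       # AI correct 80% of the time
human_own = np.where(rng.random(n) < 0.65, truth, 1 - truth)     # unaided human: 65%
follow = rng.random(n) < 0.70                                    # human follows AI 70% of the time
decision = np.where(follow, ai_pred, human_own)

reliance = follow.mean()                  # estimated P(follow AI's prediction)
payoff = (decision == truth).mean()       # behavioral expected payoff (0/1 utility)

# Under these simplified assumptions the rational agent always follows the
# strictly more accurate AI, so its expected payoff equals the AI's accuracy.
rational_payoff = (ai_pred == truth).mean()
loss_from_misreliance = rational_payoff - payoff

print(f"reliance={reliance:.3f} payoff={payoff:.3f} "
      f"rational={rational_payoff:.3f} loss={loss_from_misreliance:.3f}")
```

In the paper's richer framework the rational benchmark also conditions on the signals each agent observes, so the loss further splits into a mis-reliance term and a signal-differentiation term; this sketch collapses the latter by assuming unconditional accuracies.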
PDF: A Statistical Framework for Measuring AI Reliance.pdf