
Explanation through Reward Model Reconciliation using POMDP Tree Search

Authors:
Benjamin D. Kraske, Anshu Saksena, Anna L. Buczak, Zachary N. Sunberg
Keywords:
Computer Science, Artificial Intelligence, Artificial Intelligence (cs.AI), Human-Computer Interaction (cs.HC), Machine Learning (cs.LG)
Journal:
--
Date:
2023-04-30 16:00:00
Abstract
As artificial intelligence (AI) algorithms are increasingly used in mission-critical applications, promoting user trust in these systems will be essential to their success. Ensuring that users understand the models over which algorithms reason promotes user trust. This work seeks to reconcile differences between the reward model that an algorithm uses for online partially observable Markov decision process (POMDP) planning and the implicit reward model assumed by a human user. Action discrepancies, differences between the decisions made by the algorithm and the user, are leveraged to estimate the user's objectives as expressed in weightings of a reward function.
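To make the core idea concrete, below is a minimal, hypothetical sketch of inferring a user's reward weightings from an action discrepancy. It assumes a linear reward model (reward = weights dotted with action features) and a Boltzmann-rational choice model for the user; the candidate-weight grid, the softmax likelihood, and the toy Q-values standing in for tree-search estimates are all illustrative assumptions, not the paper's actual algorithm.

```python
# Hypothetical sketch: Bayesian estimation of a user's reward weightings
# from an action discrepancy. Assumes a linear reward model and a
# Boltzmann-rational user; none of this is taken from the paper itself.
import numpy as np

def q_values(weights, features):
    """Action values under a linear reward model: Q(a) = w . phi(a).

    `features[a]` is the feature vector phi(a) of action a, a toy
    stand-in for values a POMDP tree search would normally estimate."""
    return features @ weights

def update_posterior(posterior, candidates, features, user_action, beta=5.0):
    """Update a posterior over candidate weight vectors given the user's
    chosen action, using a softmax (Boltzmann) choice likelihood: actions
    with higher Q-value under a candidate weighting are more likely."""
    for i, w in enumerate(candidates):
        q = q_values(w, features)
        likelihood = np.exp(beta * q[user_action]) / np.exp(beta * q).sum()
        posterior[i] *= likelihood
    return posterior / posterior.sum()

# Toy setup: 2 reward features, 3 actions, a grid of candidate weightings.
candidates = [np.array([a, 1.0 - a]) for a in np.linspace(0.0, 1.0, 11)]
posterior = np.ones(len(candidates)) / len(candidates)
features = np.array([[1.0, 0.0],   # action 0 favors feature 0
                     [0.0, 1.0],   # action 1 favors feature 1
                     [0.5, 0.5]])  # action 2 is a compromise

# The planner (weighting roughly [0.9, 0.1]) would pick action 0; the user
# picks action 1 instead. This action discrepancy is the evidence used to
# estimate the user's implicit objectives.
posterior = update_posterior(posterior, candidates, features, user_action=1)
best = candidates[int(np.argmax(posterior))]
print("most likely user weighting:", best)  # skews toward feature 1
```

Repeating this update over multiple observed discrepancies would concentrate the posterior on the weightings most consistent with the user's decisions, which is the reconciliation signal the abstract describes.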
PDF: Explanation through Reward Model Reconciliation using POMDP Tree Search.pdf