
Logic for Explainable AI

Author:
Adnan Darwiche
Keywords:
Computer Science, Artificial Intelligence (cs.AI), Machine Learning (cs.LG), Logic in Computer Science (cs.LO)
Journal:
--
Date:
2023-05-08 16:00:00
Abstract
A central quest in explainable AI is understanding the decisions made by (learned) classifiers. Three dimensions of this understanding have received significant attention in recent years. The first concerns characterizing conditions on instances that are necessary and sufficient for decisions, thereby providing abstractions of instances that can be viewed as the "reasons behind decisions." The second concerns characterizing minimal conditions that are sufficient for a decision, thereby identifying maximal aspects of the instance that are irrelevant to the decision. The third concerns characterizing minimal conditions that are necessary for a decision, thereby identifying minimal perturbations to the instance that yield alternate decisions. This tutorial discusses a comprehensive, semantic, and computational theory of explainability along these dimensions, based on recent developments in symbolic logic. It also discusses how this theory is particularly applicable to non-symbolic classifiers such as those based on Bayesian networks, decision trees, random forests, and some types of neural networks.
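To make the three dimensions concrete, here is a minimal brute-force sketch in Python. The toy classifier, its feature names, and the helper functions are illustrative assumptions, not taken from the tutorial: they enumerate minimal sufficient conditions (exposing which parts of an instance are irrelevant to its decision) and minimal necessary conditions (read off as minimal feature flips that change the decision) for a three-feature Boolean classifier. Brute-force enumeration is exponential and serves only to illustrate the definitions, not the computational machinery developed in the tutorial.

from itertools import combinations

# Hypothetical toy classifier over three binary features (the names and
# the decision rule are assumptions for illustration, not from the paper).
FEATURES = ["good_credit", "employed", "owns_home"]

def classify(x):
    # Approve iff (good_credit and employed) or owns_home.
    return (x["good_credit"] and x["employed"]) or x["owns_home"]

def is_sufficient(partial, decision):
    # A partial instance is sufficient if every completion of the
    # unassigned features yields the same decision.
    free = [f for f in FEATURES if f not in partial]
    for bits in range(2 ** len(free)):
        x = dict(partial)
        for i, f in enumerate(free):
            x[f] = bool((bits >> i) & 1)
        if classify(x) != decision:
            return False
    return True

def sufficient_reasons(instance):
    # Subset-minimal parts of the instance that force its decision;
    # the features left out are irrelevant to the decision.
    decision = classify(instance)
    minimal = []
    for size in range(len(FEATURES) + 1):
        for subset in combinations(FEATURES, size):
            if any(set(m) <= set(subset) for m in minimal):
                continue  # a smaller sufficient subset already covers it
            partial = {f: instance[f] for f in subset}
            if is_sufficient(partial, decision):
                minimal.append(partial)
    return minimal

def minimal_perturbations(instance):
    # Subset-minimal sets of features whose flipping changes the
    # decision: the minimal necessary conditions, read contrastively.
    decision = classify(instance)
    minimal = []
    for size in range(1, len(FEATURES) + 1):
        for subset in combinations(FEATURES, size):
            if any(m <= set(subset) for m in minimal):
                continue
            flipped = dict(instance)
            for f in subset:
                flipped[f] = not flipped[f]
            if classify(flipped) != decision:
                minimal.append(set(subset))
    return minimal

instance = {"good_credit": True, "employed": True, "owns_home": False}
print("decision:", classify(instance))
print("sufficient reasons:", sufficient_reasons(instance))
print("minimal decision-changing flips:", minimal_perturbations(instance))

For this instance the decision is True; the single sufficient reason is {good_credit: True, employed: True}, so owns_home is irrelevant to the decision, and the minimal decision-changing flips are {good_credit} and {employed}.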
PDF: Logic for Explainable AI.pdf