Explaining Explanations in Probabilistic Logic Programming

Germán Vidal
Computer Science, Artificial Intelligence, Artificial Intelligence (cs.AI), Programming Languages (cs.PL)
2024-01-30 00:00:00
The emergence of tools based on artificial intelligence has also led to the need to produce explanations that are understandable by a human being. In some approaches, the system is not transparent (it is often referred to as a "black box"), making it difficult to generate appropriate explanations. In this work, in contrast, we consider probabilistic logic programming, a combination of logic programming (for knowledge representation) and probability (to model uncertainty). In this setting, models can be considered interpretable, which eases their understanding. However, given a particular query, the usual notion of "explanation" is associated with a set of choices, one for each random variable of the model. Unfortunately, this set has no causal structure and, in fact, some of the choices are irrelevant to the considered query. To overcome these shortcomings, we present an approach to explaining explanations that is based on the definition of a query-driven inference mechanism for probabilistic logic programs.
PDF: Explaining Explanations in Probabilistic Logic Programming.pdf
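The abstract's notion of an "explanation" as a set of choices, one per random variable, can be illustrated with a minimal sketch. The following Python snippet (a hypothetical ProbLog-style program, not taken from the paper) enumerates all total choices, sums the weights of those in which a query succeeds, and shows that some choices (here, the `sprinkler` fact) are irrelevant to the query:

```python
from itertools import product

# Hypothetical probabilistic logic program (ProbLog-style syntax):
#   0.4::burglary.  0.2::earthquake.  0.7::sprinkler.
#   alarm :- burglary.
#   alarm :- earthquake.
facts = {"burglary": 0.4, "earthquake": 0.2, "sprinkler": 0.7}

def alarm(world):
    # The query alarm succeeds iff burglary or earthquake holds;
    # note that sprinkler plays no role in its derivation.
    return world["burglary"] or world["earthquake"]

names = list(facts)
p_query = 0.0
explanations = []  # total choices under which the query succeeds
for values in product([True, False], repeat=len(names)):
    world = dict(zip(names, values))
    weight = 1.0
    for n in names:
        weight *= facts[n] if world[n] else 1 - facts[n]
    if alarm(world):
        p_query += weight
        explanations.append(world)

# P(alarm) = 1 - (1 - 0.4)*(1 - 0.2) = 0.52
print(round(p_query, 4))
# 6 successful total choices: each of the 3 relevant (burglary, earthquake)
# combinations appears twice, once per value of the irrelevant sprinkler fact
print(len(explanations))
```

Each element of `explanations` is one "total choice" in the abstract's sense; the fact that `sprinkler` varies freely within them is exactly the kind of irrelevant choice the paper's query-driven inference mechanism aims to eliminate.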