
Can we forget how we learned? Representing states in iterated belief revision

Author:
Paolo Liberatore
Keyword:
Computer Science, Artificial Intelligence (cs.AI)
journal:
--
date:
2023-05-15 16:00:00
Abstract
The three most common representations of states in iterated belief revision are compared: explicit, by levels, and by history. The first is a connected preorder between models, the second is a list of formulae representing equivalence classes, and the third is the sequence of the previous revisions. The third depends on the revision semantics and on history rewriting, which in turn depends on the allowed rewritings. All three mechanisms represent all possible states. With arbitrary history rewritings allowed, a rewritten history of lexicographic revisions is more compact than the other considered representations. Establishing the redundancy of such a history is a mild form of rewriting. This problem is coNP-complete in the general case; it remains hard even on histories of two revisions and on histories of arbitrarily many Horn formulae, but is polynomial on histories of two Horn formulae. A minor technical result is a polynomial-time algorithm for establishing whether a Horn formula is equivalent to the negation of another Horn formula.
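To make the three representations concrete, here is a minimal sketch of how each could be stored as a data structure. It encodes propositional models as frozensets of true atoms; the variable names and the rank-function encoding of the preorder are illustrative assumptions, not taken from the paper.

```python
from itertools import product

ATOMS = ("a", "b")  # tiny example vocabulary

def all_models(atoms):
    """Enumerate all propositional models as frozensets of true atoms."""
    for bits in product((False, True), repeat=len(atoms)):
        yield frozenset(x for x, t in zip(atoms, bits) if t)

# 1. Explicit: a connected preorder between models, stored here as a rank
#    function (lower rank = more plausible), which induces the preorder.
explicit_state = {m: 0 if "a" in m else 1 for m in all_models(ATOMS)}

# 2. By levels: a list of formulae, one per equivalence class of the
#    preorder, from most to least plausible.
level_state = ["a", "not a"]

# 3. By history: the sequence of the previous revisions; the state it
#    denotes depends on the revision semantics (e.g. lexicographic).
history_state = ["b", "a"]
```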
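The final claim concerns deciding whether a Horn formula is equivalent to the negation of another Horn formula. The following brute-force check only illustrates the decision problem by enumerating all assignments, which is exponential; the paper's polynomial-time algorithm is not reproduced here, and the clause encoding is an assumption made for the example.

```python
from itertools import product

def satisfies(clause, model):
    """A Horn clause is a pair (negative_atoms, positive_atom_or_None)."""
    negs, pos = clause
    return any(a not in model for a in negs) or (pos is not None and pos in model)

def holds(horn, model):
    """A Horn formula is a list of Horn clauses; all must be satisfied."""
    return all(satisfies(c, model) for c in horn)

def equivalent_to_negation(f1, f2, atoms):
    """Check F1 == not-F2 by exhaustive model enumeration (exponential)."""
    for bits in product((False, True), repeat=len(atoms)):
        model = frozenset(a for a, t in zip(atoms, bits) if t)
        if holds(f1, model) == holds(f2, model):  # must differ on every model
            return False
    return True

# Example: F1 = {a}, F2 = {not a}; F1 is equivalent to the negation of F2.
f1 = [((), "a")]          # unit clause: a
f2 = [(("a",), None)]     # unit clause: not a
print(equivalent_to_negation(f1, f2, ("a",)))  # True
```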
PDF: Can we forget how we learned? Representing states in iterated belief revision.pdf