Explainable Pathfinding for Inscrutable Planners with Inductive Logic Programming
The complexity of the solutions that artificial intelligence can learn for problem solving currently surpasses its ability to explain those solutions. In many domains, an explainable solution is a necessary condition while an optimal one is not. We therefore seek to constrain solutions to the space of solutions that can be explained to a human. To do this, we build on inductive logic programming (ILP) techniques that allow us to define robust background knowledge and inductive biases. By combining ILP with a given inscrutable planner, we construct an explainable graph representing solutions for all states in the state space. This graph can then be summarized using a variety of methods, such as hierarchical representations and simple if/else rules. We test our approach on Towers of Hanoi and discuss future work on applications to the Rubik's cube.
Published in ICAPS 2022 Workshop on Explainable AI Planning, Summer 2022.
Copyright © 2022, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
Agostinelli, F., Panta, R., Khandelwal, V., Srivastava, B., Muppasani, B. C., Lakkaraju, K., & Wu, D. (2022, August 22). Explainable Pathfinding for Inscrutable Planners with Inductive Logic Programming. ICAPS 2022 Workshop on Explainable AI Planning. https://openreview.net/forum?id=S44aSPW6lRa
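As a minimal illustration of the solution graph the abstract describes (not the paper's ILP method, which learns symbolic rules from such a graph), the sketch below enumerates the Towers of Hanoi state space for two disks, maps every non-goal state to a best next state via backward breadth-first search (a stand-in for querying an inscrutable planner at each state), and follows that graph from the standard start state to the goal. All function and variable names here are hypothetical.

```python
from collections import deque

def successors(state):
    """All states reachable by one legal Towers of Hanoi move.

    A state is a tuple of three pegs; each peg is a tuple of disk
    sizes from bottom to top, so a disk may only land on a larger one.
    """
    out = []
    for i, src in enumerate(state):
        if not src:
            continue
        disk = src[-1]  # top disk of the source peg
        for j, dst in enumerate(state):
            if i != j and (not dst or dst[-1] > disk):
                pegs = list(state)
                pegs[i] = src[:-1]
                pegs[j] = dst + (disk,)
                out.append(tuple(pegs))
    return out

def solution_graph(goal):
    """Map every state to its distance from the goal and a best next state.

    BFS runs backward from the goal; since Hanoi moves are reversible,
    the same successor function serves for both directions.
    """
    dist = {goal: 0}
    queue = deque([goal])
    while queue:
        s = queue.popleft()
        for t in successors(s):
            if t not in dist:
                dist[t] = dist[s] + 1
                queue.append(t)
    # The "explainable graph": each state points to a distance-reducing move.
    policy = {s: min(successors(s), key=dist.__getitem__)
              for s in dist if s != goal}
    return dist, policy

n = 2  # number of disks; the full state space has 3**n states
goal = ((), (), tuple(range(n, 0, -1)))
start = (tuple(range(n, 0, -1)), (), ())
dist, policy = solution_graph(goal)

# Follow the graph from the start state; for 2 disks this takes 3 moves.
path = [start]
while path[-1] != goal:
    path.append(policy[path[-1]])
```

The per-state `policy` mapping is the object a summarization pass (hierarchical grouping or if/else rule extraction, as in the abstract) would then compress into a human-readable form.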