
Interpretable Model-based Hierarchical Reinforcement Learning Using Inductive Logic Programming

EasyChair Preprint 5668, version 2

14 pages
Date: June 23, 2021

Abstract

Recently, deep reinforcement learning (RL) has achieved tremendous success in a wide range of applications. However, it notoriously lacks data efficiency and interpretability. Data efficiency is important because interacting with the environment is expensive. Interpretability, in turn, increases the transparency of black-box-style deep RL models and hence helps gain users' trust. In this work, we propose a new hierarchical framework via symbolic RL that leverages a symbolic transition model to improve data efficiency and to make the learned policy interpretable. The framework consists of a high-level agent, a subtask solver, and a symbolic transition model. Without assuming any prior knowledge of the state transitions, we adopt inductive logic programming (ILP) to learn the rules governing symbolic state transitions, which introduces interpretability and makes the learned behavior understandable to users. In empirical experiments, we confirmed that the proposed framework offers approximately 30% to 40% better data efficiency than previous methods.
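
To make the architecture described in the abstract concrete, here is a minimal sketch in Python of the three components it names: a symbolic transition model, a high-level agent that plans over symbolic states with that model, and a subtask solver. The toy key-and-door domain, the hand-written rules standing in for ILP-learned clauses, and all identifiers are assumptions made for this illustration, not the paper's implementation.

from collections import deque

# A symbolic state is a frozenset of ground atoms, e.g. {"at(key)", "has(key)"}.
# Each rule plays the role of an ILP-learned transition clause for one
# high-level option: (preconditions, effects). Hand-written here; in the
# paper's framework these would be induced from experience by ILP.
RULES = {
    "pick_key":  ({"at(key)"},               {"has(key)"}),
    "go_door":   (set(),                     {"at(door)"}),
    "open_door": ({"has(key)", "at(door)"},  {"open(door)"}),
    "go_key":    (set(),                     {"at(key)"}),
}

def apply(state, option):
    """Predict the next symbolic state if the option's preconditions hold."""
    pre, eff = RULES[option]
    if not pre <= state:
        return None
    return frozenset(state | eff)

def plan(start, goal):
    """High-level agent: breadth-first search over symbolic states,
    using the learned transition model to predict option outcomes."""
    frontier = deque([(frozenset(start), [])])
    seen = {frozenset(start)}
    while frontier:
        state, path = frontier.popleft()
        if goal <= state:
            return path
        for option in RULES:
            nxt = apply(state, option)
            if nxt is not None and nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [option]))
    return None

def subtask_solver(option):
    """Stub: in the framework a low-level RL policy would execute each
    subtask; here we only report which option would be executed."""
    print(f"executing low-level policy for option: {option}")

if __name__ == "__main__":
    for option in plan(start=set(), goal={"open(door)"}):
        subtask_solver(option)  # go_door, go_key, pick_key, open_door

In the actual framework, the transition rules are learned from interaction data by ILP rather than written by hand, and the subtask solver is a learned low-level policy; planning over the symbolic model is what yields the data-efficiency gains the abstract reports.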

Keyphrases: inductive logic programming, reinforcement learning, hierarchical learning, planning

BibTeX entry
BibTeX has no entry type for preprints; the following is a workaround that produces a correct reference:
@booklet{EasyChair:5668,
  author       = {Duo Xu and Faramarz Fekri},
  title        = {Interpretable Model-based Hierarchical Reinforcement Learning Using Inductive Logic Programming},
  howpublished = {EasyChair Preprint 5668},
  year         = {2021}
}