Explainability in deep reinforcement learning

Heuillet, Alexandre ; Couthouis, Fabien ; Díaz-Rodríguez, Natalia

Knowledge-based systems, 2021-02, Vol.214, p.106685, Article 106685 [Peer-reviewed journal]

Amsterdam: Elsevier B.V.

Full text available

  • Title:
    Explainability in deep reinforcement learning
  • Author: Heuillet, Alexandre ; Couthouis, Fabien ; Díaz-Rodríguez, Natalia
  • Subjects: Algorithms ; Artificial Intelligence ; Artificial neural networks ; Computer Science ; Computer Vision and Pattern Recognition ; Deep Learning ; Explainable artificial intelligence ; Machine Learning ; Multiagent Systems ; Neural and Evolutionary Computing ; Reinforcement Learning ; Representation learning ; Responsible artificial intelligence
  • Is part of: Knowledge-based systems, 2021-02, Vol.214, p.106685, Article 106685
  • Description: A large set of the explainable Artificial Intelligence (XAI) literature is emerging on feature relevance techniques that explain a deep neural network (DNN) output or on explaining models that ingest image source data. However, assessing how XAI techniques can help understand models beyond classification tasks, e.g. for reinforcement learning (RL), has not been extensively studied. We review recent works aimed at attaining Explainable Reinforcement Learning (XRL), a relatively new subfield of Explainable Artificial Intelligence intended for use in general public applications, with diverse audiences, requiring ethical, responsible and trustworthy algorithms. In critical situations where it is essential to justify and explain the agent's behaviour, better explainability and interpretability of RL models could help gain scientific insight into the inner workings of what is still considered a black box. We mainly evaluate studies that directly link explainability to RL, and split these into two categories according to the way the explanations are generated: transparent algorithms and post-hoc explainability (an illustrative sketch of a post-hoc feature-relevance technique follows this record). We also review the most prominent XAI works through the lens of how they could potentially enlighten the further deployment of the latest advances in RL in the demanding present and future of everyday problems.
    Highlights:
    • We review concepts related to the explainability of Deep Reinforcement Learning models.
    • We provide a comprehensive analysis of the Explainable Reinforcement Learning literature.
    • We propose a categorization of existing Explainable Reinforcement Learning methods.
    • We discuss ideas emerging from the literature and provide insights for future work.
  • Publisher: Amsterdam: Elsevier B.V.
  • Language: English
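
The description above characterizes post-hoc explainability for RL in terms of feature relevance. As a purely illustrative aid, not code or a method from the article itself, the following minimal Python (PyTorch) sketch shows one common post-hoc feature-relevance technique, vanilla gradient saliency, applied to a toy policy network. The network architecture, observation size, and function names are assumptions made here for demonstration only.

    # Illustrative sketch only: gradient-based feature relevance (saliency)
    # for a toy RL policy network. All names and dimensions are hypothetical
    # assumptions, not methods or code from the surveyed article.
    import torch
    import torch.nn as nn


    class PolicyNet(nn.Module):
        """Toy policy: maps an observation vector to action logits."""

        def __init__(self, obs_dim: int = 8, n_actions: int = 4):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(obs_dim, 32),
                nn.ReLU(),
                nn.Linear(32, n_actions),
            )

        def forward(self, obs: torch.Tensor) -> torch.Tensor:
            return self.net(obs)


    def saliency(policy: nn.Module, obs: torch.Tensor) -> torch.Tensor:
        """Per-feature relevance |d logit_a / d obs_i| for the greedy action a."""
        obs = obs.clone().detach().requires_grad_(True)
        logits = policy(obs)
        action = logits.argmax(dim=-1).item()   # action the agent would take
        logits[0, action].backward()            # gradient of that action's logit
        return obs.grad.abs().squeeze(0)        # relevance score per feature


    if __name__ == "__main__":
        torch.manual_seed(0)
        policy = PolicyNet()
        observation = torch.randn(1, 8)         # one batched observation
        scores = saliency(policy, observation)
        for i, score in enumerate(scores.tolist()):
            print(f"feature {i}: relevance {score:.4f}")

Running the sketch prints one relevance score per observation feature; larger scores mark features whose small changes most affect the logit of the greedily chosen action, which is the intuition behind the saliency-style explanations the survey groups under post-hoc explainability.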
