
Representation Analysis of Deep Reinforcement Learning algorithms in Robotic Environments

  • Author / Creator
    Taghian Jazi, Mehran
  • The rise of Deep Learning (DL) and its ability to learn complex feature representations has significantly impacted Reinforcement Learning (RL). Deep Reinforcement Learning (DRL) has made it possible to apply RL to complex real-world problems and even achieve human-level performance. Robotics is one such problem domain: DRL agents have recently learned optimal behavior in a range of robotic environments. A trained policy carries much information in its learned representation; however, because the policy is approximated by a neural network, it is a black box.

    Explainable Artificial Intelligence (XAI) is a young AI subfield focused on interpreting the behavior of Machine Learning models. Much of the XAI literature concerns feature-relevance techniques that explain the outputs of deep neural networks (DNNs) processing images. These techniques have been extended to explain graph classification tasks using Graph Networks (GNs). Nevertheless, such methods have not yet been exploited to analyze the behavior a DRL agent has learned in a robotic environment.

    In this work, we propose to analyze the representation learned by a DRL agent's policy in a robotic environment. We use a graph structure to represent the robot's observations in an entity-relation manner, and graph neural networks as function approximators in DRL. For the interpretation phase, we use Layer-wise Relevance Propagation (LRP), a feature-relevance technique that has been successfully applied to explain image and graph classification tasks, to interpret the learned policy. We evaluate the information provided by LRP in two simulated robotic environments in MuJoCo. The experiments and evaluation methods were carefully designed to measure the value of the knowledge our approach gains when analyzing learned representations in Deep Reinforcement Learning tasks.
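    To give a concrete sense of the interpretation phase, the core LRP-epsilon rule can be sketched for fully connected layers. This is a minimal illustrative sketch only: the thesis applies LRP to graph neural network policies, whereas the numpy implementation, toy network, and random weights below are assumptions introduced here for illustration, not the thesis's code.

```python
import numpy as np

def lrp_epsilon(weights, biases, activations, relevance, eps=1e-6):
    """Redistribute output relevance back to the inputs of a stack of
    dense layers using the LRP-epsilon rule (illustrative sketch)."""
    R = relevance
    for W, b, a in zip(reversed(weights), reversed(biases), reversed(activations)):
        z = W @ a + b                    # pre-activations of this layer
        s = R / (z + eps * np.sign(z))   # relevance ratios, stabilized by eps
        R = a * (W.T @ s)                # relevance assigned to layer inputs
    return R

# Toy 2-layer ReLU network with random (illustrative) weights.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)

x = rng.normal(size=3)                  # stand-in for an observation vector
h = np.maximum(0.0, W1 @ x + b1)        # hidden ReLU layer
y = W2 @ h + b2                         # network output (e.g. action scores)

# Input-level relevance: one score per input feature.
R_in = lrp_epsilon([W1, W2], [b1, b2], [x, h], y)
```

    The key property exploited for interpretation is conservation: with zero biases, the input relevances sum (up to the eps stabilizer) to the network output, so each attribution can be read as that feature's share of the prediction.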

  • Graduation date
    Fall 2022
  • Type of Item
    Thesis
  • Degree
    Master of Science
  • DOI
    https://doi.org/10.7939/r3-7f0x-g135
  • License
    This thesis is made available by the University of Alberta Library with permission of the copyright owner solely for non-commercial purposes. This thesis, or any portion thereof, may not otherwise be copied or reproduced without the written consent of the copyright owner, except to the extent permitted by Canadian copyright law.