Effective Transfer Learning with the Use of Distance Metrics

  • Author / Creator
    Shiva Soleimany Dizicheh
  • Abstract
    Reinforcement learning (RL) offers agents a framework for learning hard-to-engineer behaviors that other machine learning (ML) approaches cannot handle due to the complex nature of these problems. However, it is often impractical to learn a complex task from scratch because of the large sample complexity of RL algorithms, the infeasibility of gathering experience in dangerous setups, or the long training periods the algorithms need to converge. The agent’s training can be further hindered by the difficulty of the target task, poor state representation, or sparse reward signals.
    Transfer learning is the area of research concerned with methods that speed up the training of RL agents by transferring knowledge the agent has gained from one or more source-task Markov decision processes (MDPs) to the target task. Transfer learning can eliminate the need to train from scratch every time the environment changes slightly and helps the agent make use of its past experience in similar domains. However, transfer learning may inadvertently hurt target performance, a phenomenon known as negative transfer. A metric that approximately measures the similarity between the source task and the goal task can therefore help us choose the source task more wisely and perform better on the goal task. The transfer learning literature includes several metrics for measuring the similarity between MDPs; among them are distance metrics based on the averaged difference between the corresponding state-action transition distributions of the two tasks, or on the graph similarity between graphs representing the transition and reward functions of the source and target tasks. In this work, we examine three similarity metrics and their ability to estimate the similarity between two MDPs. All three metrics are based on distances over the state-action spaces of the two MDPs. The first two are based on the transition dynamics but focus on the action space and the state space separately. The third focuses on the difference between the immediate reward values of corresponding state-action pairs in the source and target tasks. After pre-training on source tasks and then performing transfer learning, we examine how well each metric predicts the agent’s performance on the goal task in two OpenAI Gym domains: Hopper and Pendulum. The thesis first presents the necessary background on the algorithms and environments used, then explains the procedure for calculating each metric, and finally presents and analyzes the experimental results. In the last part of the thesis, we pose questions as guidelines for future work aimed at explaining the observed results.
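
    The abstract’s third metric compares the immediate rewards of corresponding state-action pairs in the source and target tasks. The snippet below is only a minimal illustrative sketch of such a reward-based distance, not the thesis’s actual procedure: it assumes the two tasks share a state-action space, and the Pendulum-style reward variants, the uniform sampler, and all function names are hypothetical.

    ```python
    import numpy as np


    def reward_distance(reward_src, reward_tgt, sample_state_action,
                        n_samples=10_000, rng=None):
        """Monte Carlo estimate of the average absolute difference between the
        immediate rewards of two MDPs over a shared state-action space.

        reward_src, reward_tgt : callables mapping (state, action) -> float
        sample_state_action    : callable mapping an RNG -> (state, action)
        """
        rng = np.random.default_rng() if rng is None else rng
        diffs = [
            abs(reward_src(s, a) - reward_tgt(s, a))
            for s, a in (sample_state_action(rng) for _ in range(n_samples))
        ]
        return float(np.mean(diffs))


    # Hypothetical example: two Pendulum-like reward functions that differ only
    # in how strongly they penalise control effort.
    def make_pendulum_reward(effort_weight):
        def reward(state, action):
            theta, theta_dot = state
            return -(theta ** 2 + 0.1 * theta_dot ** 2 + effort_weight * action ** 2)
        return reward


    def sample_state_action(rng):
        state = rng.uniform([-np.pi, -8.0], [np.pi, 8.0])   # (theta, theta_dot)
        action = rng.uniform(-2.0, 2.0)                      # torque
        return state, action


    if __name__ == "__main__":
        d = reward_distance(make_pendulum_reward(0.001),
                            make_pendulum_reward(0.01),
                            sample_state_action)
        print(f"estimated reward-based distance: {d:.4f}")
    ```

    A larger estimated distance would suggest the two tasks reward the same behavior differently, which is the kind of signal the thesis relates to the risk of negative transfer.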

  • Subjects / Keywords
  • Graduation date
    Spring 2022
  • Type of Item
    Thesis
  • Degree
    Master of Science
  • DOI
    https://doi.org/10.7939/r3-yfwh-4031
  • License
    This thesis is made available by the University of Alberta Libraries with permission of the copyright owner solely for non-commercial purposes. This thesis, or any portion thereof, may not otherwise be copied or reproduced without the written consent of the copyright owner, except to the extent permitted by Canadian copyright law.