
Continual Auxiliary Task Learning

  • Author / Creator
    McLeod, Matthew
  • Learning auxiliary tasks, such as multiple predictions about the world, can provide many benefits to reinforcement learning systems. A variety of off-policy learning algorithms have been developed to learn such predictions, but as yet there is little work on how to adapt the behaviour to gather useful data for those off-policy predictions. In this thesis, we investigate a reinforcement learning system designed to learn a collection of auxiliary tasks, with a behaviour policy that learns to take actions to improve the auxiliary predictions. We highlight the inherent non-stationarity in this continual auxiliary task learning problem, for both the prediction learners and the behaviour learner. We develop an algorithm based on successor features that facilitates tracking under non-stationary rewards and propose how behaviour can be specialized to learn areas of interest for a prediction learner. We conduct an in-depth study into the resulting multi-prediction learning system. (A minimal, illustrative sketch of the successor-feature idea appears after this record.)

  • Subjects / Keywords
  • Graduation date
    Fall 2021
  • Type of Item
    Thesis
  • Degree
    Master of Science
  • DOI
    https://doi.org/10.7939/r3-6v6x-2320
  • License
    This thesis is made available by the University of Alberta Libraries with permission of the copyright owner solely for non-commercial purposes. This thesis, or any portion thereof, may not otherwise be copied or reproduced without the written consent of the copyright owner, except to the extent permitted by Canadian copyright law.
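A brief illustration of why successor features help with tracking under non-stationary rewards, as mentioned in the abstract: the value function is factored as V(s) ≈ ψ(s)ᵀw, where the successor features ψ summarize the (slowly learned) dynamics and w is a small reward-weight vector that can be re-tracked quickly when the reward signal drifts. The sketch below is a generic linear successor-feature learner, not the algorithm developed in the thesis; the class name, step sizes, and toy environment are illustrative assumptions.

```python
# Minimal sketch of linear successor features (SF) with a separately tracked
# reward model. The SF matrix Psi is learned slowly via TD, while the reward
# weights w are tracked with a larger step size so that a drifting reward
# only requires w to re-adapt. Names and constants are illustrative only.

import numpy as np

class LinearSF:
    def __init__(self, n_features, gamma=0.9, alpha_psi=0.05, alpha_w=0.5):
        self.gamma = gamma            # discount for the SF accumulation
        self.alpha_psi = alpha_psi    # slow step size for successor features
        self.alpha_w = alpha_w        # fast step size so w can track reward drift
        self.Psi = np.zeros((n_features, n_features))  # linear SF parameters
        self.w = np.zeros(n_features)                   # reward weights

    def value(self, phi):
        # V(s) ~= psi(s)^T w, with psi(s) = Psi^T phi(s) for linear features
        return (self.Psi.T @ phi) @ self.w

    def update(self, phi, reward, phi_next):
        # TD(0) update for the successor features: the cumulant is the feature vector
        psi, psi_next = self.Psi.T @ phi, self.Psi.T @ phi_next
        sf_error = phi + self.gamma * psi_next - psi
        self.Psi += self.alpha_psi * np.outer(phi, sf_error)
        # LMS tracking of the (possibly drifting) reward as r ~= phi^T w
        self.w += self.alpha_w * (reward - phi @ self.w) * phi

# Toy usage: a 3-state chain with one-hot features whose reward flips sign halfway.
if __name__ == "__main__":
    sf = LinearSF(n_features=3)
    states = np.eye(3)
    sign = 1.0
    for step in range(2000):
        if step == 1000:
            sign = -1.0  # non-stationary reward: only w must re-adapt
        s = step % 3
        s_next = (s + 1) % 3
        reward = sign if s_next == 0 else 0.0
        sf.update(states[s], reward, states[s_next])
    print("V(s0) after reward flip:", sf.value(states[0]))
```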