No More Pesky Hyperparameters: Offline Hyperparameter Tuning For Reinforcement Learning

  • Author / Creator
    Sakhadeo, Archit
  • Abstract
    The performance of reinforcement learning (RL) agents is sensitive to the choice of hyperparameters. In real-world settings like robotics or industrial control systems, however, testing different hyperparameter configurations directly on the environment can be financially prohibitive, dangerous, or time-consuming. We propose a new approach that tunes hyperparameters from offline logs of data in order to fully specify the hyperparameters of an RL agent that learns online in the real world. The approach is conceptually simple: we first learn a model of the environment from the offline data, which we call a calibration model, and then simulate learning in the calibration model under several hyperparameter configurations. We evaluate each configuration inside the calibration model against a desired performance criterion and identify promising hyperparameters for deployment. We identify several criteria that make this strategy effective and develop an approach that satisfies them. We empirically investigate the method in a variety of settings to identify when it is effective and when it fails. We demonstrate that tuning hyperparameters offline and deploying an RL agent with these hyperparameters is a more feasible problem to tackle than transferring a fixed policy learned from the offline data. (A minimal illustrative sketch of this tuning loop follows the metadata list below.)

  • Subjects / Keywords
  • Graduation date
    Fall 2021
  • Type of Item
    Thesis
  • Degree
    Master of Science
  • DOI
    https://doi.org/10.7939/r3-t3y1-d119
  • License
    This thesis is made available by the University of Alberta Libraries with permission of the copyright owner solely for non-commercial purposes. This thesis, or any portion thereof, may not otherwise be copied or reproduced without the written consent of the copyright owner, except to the extent permitted by Canadian copyright law.
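
What follows is a minimal, hypothetical sketch of the calibration-model tuning loop described in the abstract: fit a model of the environment from offline data, simulate online learning inside it under several candidate hyperparameters, and pick the best-scoring configuration for deployment. The toy bandit-style environment, the epsilon-greedy learner, and all names (fit_calibration_model, simulate_learning, the step-size hyperparameter) are illustrative assumptions, not the thesis's actual implementation or results.

    # Hypothetical sketch of offline hyperparameter tuning via a calibration
    # model. Everything here is a toy stand-in for the approach the abstract
    # describes, not the thesis's code.
    import random

    def fit_calibration_model(offline_log):
        """Learn a simple model of the environment from offline transitions.
        Here: estimate the mean reward per action from the log, standing in
        for a learned dynamics/reward model."""
        sums, counts = {}, {}
        for action, reward in offline_log:
            sums[action] = sums.get(action, 0.0) + reward
            counts[action] = counts.get(action, 0) + 1
        means = {a: sums[a] / counts[a] for a in sums}

        def simulate_step(action):
            # Sample a noisy reward around the estimated mean for this action.
            return means.get(action, 0.0) + random.gauss(0.0, 0.1)

        return simulate_step

    def simulate_learning(simulate_step, step_size, steps=200, actions=(0, 1, 2)):
        """Run a toy online learner (incremental value estimates with the given
        step size) inside the calibration model; return its mean reward."""
        values = {a: 0.0 for a in actions}
        total = 0.0
        for _ in range(steps):
            # Epsilon-greedy action selection.
            if random.random() < 0.1:
                a = random.choice(actions)
            else:
                a = max(values, key=values.get)
            r = simulate_step(a)
            values[a] += step_size * (r - values[a])  # hyperparameter under test
            total += r
        return total / steps

    if __name__ == "__main__":
        random.seed(0)
        # Offline log: (action, reward) pairs gathered by some behaviour policy.
        true_means = {0: 0.2, 1: 1.0, 2: 0.5}
        offline_log = [(a, true_means[a] + random.gauss(0.0, 0.2))
                       for a in (random.choice((0, 1, 2)) for _ in range(1000))]

        calibration_model = fit_calibration_model(offline_log)

        # Evaluate candidate hyperparameters entirely inside the calibration
        # model, then deploy the most promising one.
        candidates = [0.01, 0.05, 0.1, 0.5]
        scores = {h: simulate_learning(calibration_model, h) for h in candidates}
        best = max(scores, key=scores.get)
        print("scores:", scores)
        print("deploy step size:", best)

Note the design point this sketch tries to capture: only a hyperparameter setting, not a fixed policy, is transferred from the offline phase, and the agent still learns online after deployment.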