State Construction in Reinforcement Learning

  • Author / Creator
    Rafiee, Banafsheh
  • In reinforcement learning, the notion of state plays a central role. A reinforcement learning agent requires the state to evaluate its current situation, select actions, and construct a model of the environment. In the classic setting, it is assumed that the environment provides the agent with the state. However, in most cases of interest, the observations received from the environment convey only partial information about its state.

    Ideally, the agent would construct the state directly from the data stream of its interaction with the environment. The prevalent approach to state construction is to train a large neural network with backpropagation, representing the state as the hidden state of a recurrent neural network or the last hidden layer of a feed-forward network. Building upon this approach, existing solution methods have made considerable progress. However, they remain limited for several reasons, such as the loss of plasticity in neural networks.

    The first contribution of this thesis is the proposal of three diagnostic benchmarks, inspired by animal learning, for studying state construction.
    The benchmarks have a simple setting: there are only a few signals to make predictions about. They are nevertheless challenging, because solving them requires complex computational models. The proposed benchmarks include knobs for controlling the difficulty of the problem.

    The second contribution of this thesis is empirical. We conduct a comprehensive empirical study of prominent recurrent learning methods, illuminating some of the limitations of existing solution methods. The study suggests that: 1) none of the methods is fully satisfactory; 2) recurrent neural networks (RNNs) can be expensive in terms of memory and computation; 3) RNNs trained with truncated backpropagation through time (T-BPTT) are sensitive to the truncation parameter; and 4) augmenting the input to RNNs with traces of the observation signals can make T-BPTT less sensitive to the truncation parameter (a schematic sketch of such trace augmentation follows the abstract).

    The third contribution of the thesis concerns auxiliary task discovery. Learning about tasks auxiliary to the main task of maximizing the sum of discounted rewards can assist state construction. It would be appealing if the agent could discover useful auxiliary tasks automatically. In this work, we propose a method for auxiliary task discovery based on the idea of generate-and-test: the method continually generates auxiliary tasks, evaluates them, and replaces the useless ones with newly generated tasks (a schematic generate-and-test loop is sketched after the abstract). We show the efficacy of the proposed method empirically.
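
    The following is a minimal sketch, not taken from the thesis, of the trace-augmentation idea mentioned in the second contribution: the observation fed to an RNN is concatenated with exponentially decaying traces of the observation signals, so that information about events from many steps ago remains in the input even when backpropagation through time is truncated after a few steps. The decay rates and the toy observation stream are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the thesis implementation) of
# augmenting observations with exponentially decaying traces before
# feeding them to an RNN trained with truncated BPTT.

import numpy as np

def augment_with_traces(observations, decays=(0.9, 0.99)):
    """Return observations concatenated with decaying traces.

    observations : array of shape (T, n) with the raw signals.
    decays       : one trace per decay rate is kept for every signal.
    """
    T, n = observations.shape
    traces = np.zeros((len(decays), n))
    augmented = np.zeros((T, n * (1 + len(decays))))
    for t in range(T):
        obs = observations[t]
        for i, lam in enumerate(decays):
            # trace_t = lam * trace_{t-1} + obs_t: an exponentially
            # decaying memory of past activations of each signal.
            traces[i] = lam * traces[i] + obs
        augmented[t] = np.concatenate([obs, traces.ravel()])
    return augmented

# Toy stream: a single binary signal that fires once at t = 3.
obs_stream = np.zeros((10, 1))
obs_stream[3, 0] = 1.0
print(augment_with_traces(obs_stream))
```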
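
    The sketch below illustrates, under stated assumptions rather than as the thesis algorithm, the generate-and-test loop described in the third contribution: a pool of candidate auxiliary tasks is maintained, each task's utility is tracked, and the lowest-utility tasks are periodically replaced with newly generated ones. The task descriptor, the noisy utility signal, and the replacement fraction are placeholders for whatever the agent actually measures.

```python
# Minimal generate-and-test sketch for auxiliary task discovery
# (illustrative assumptions only, not the thesis algorithm).

import random

def generate_task(rng):
    # Hypothetical task descriptor, e.g. "predict observation feature i".
    return {"feature": rng.randrange(100), "utility": 0.0}

def generate_and_test(steps=1000, pool_size=8, replace_every=100,
                      replace_fraction=0.25, seed=0):
    rng = random.Random(seed)
    pool = [generate_task(rng) for _ in range(pool_size)]
    for step in range(1, steps + 1):
        for task in pool:
            # Placeholder evaluation: a running average of a noisy utility
            # signal stands in for how much the task helps state construction.
            sample = rng.random()
            task["utility"] += 0.01 * (sample - task["utility"])
        if step % replace_every == 0:
            # Test phase: discard the weakest tasks, generate replacements.
            pool.sort(key=lambda t: t["utility"])
            n_replace = max(1, int(replace_fraction * pool_size))
            pool[:n_replace] = [generate_task(rng) for _ in range(n_replace)]
    return pool

print(generate_and_test())
```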

  • Subjects / Keywords
  • Graduation date
    Spring 2024
  • Type of Item
    Thesis
  • Degree
    Doctor of Philosophy
  • DOI
    https://doi.org/10.7939/r3-f49v-9s42
  • License
    This thesis is made available by the University of Alberta Libraries with permission of the copyright owner solely for non-commercial purposes. This thesis, or any portion thereof, may not otherwise be copied or reproduced without the written consent of the copyright owner, except to the extent permitted by Canadian copyright law.