Forward Model Learning with an Entity-Based Representation for Games

  • Author / Creator
    Yousefzadeh Khameneh, Nazanin
  • Reinforcement learning (RL) is a powerful way of solving sequential decision-making tasks in which the agent’s goal is to learn how to maximize its reward. RL approaches can be divided into two categories: model-based approaches, which learn with the help of a model of the environment, and model-free approaches, in which the agent learns only by maximizing its reward. However, training an RL agent in a complex environment requires a vast amount of training data, which can be expensive in tasks where the agent is expected to operate close to humans. In this work, we explore building a simulated environment using a new representation of game frames. We use this representation to pre-train an agent and transfer the learned policy to the real environment. Our model can simulate the changes of a real environment in response to an agent’s action and returns a reward value. Our major contribution is a new entity-based representation for game frames, which we use as the baseline for training our virtual environment. Our approach outperforms an existing method for learning a model of an environment while using significantly less training data.
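The abstract describes a learned forward model that, given a state and an action, predicts the next state and a reward, and an agent pre-trained inside that model. A minimal sketch of such an interface is shown below; the names (`ForwardModel`, `predict`, `pretrain_rollout`) and the toy transition table are illustrative assumptions, not the thesis's actual implementation:

```python
class ForwardModel:
    """Predicts (next_state, reward) from a (state, action) pair.

    Hypothetical sketch: a hand-coded transition table stands in for
    the learned model so the interface can be exercised end to end.
    """

    def __init__(self, transitions):
        # transitions: dict mapping (state, action) -> (next_state, reward)
        self.transitions = transitions

    def predict(self, state, action):
        return self.transitions[(state, action)]


def pretrain_rollout(model, state, policy, steps):
    """Roll out a policy inside the learned model instead of the real game."""
    total_reward = 0.0
    for _ in range(steps):
        action = policy(state)
        state, reward = model.predict(state, action)
        total_reward += reward
    return state, total_reward


# Toy two-state environment: moving "right" from s0 reaches s1 and pays 1.
model = ForwardModel({
    ("s0", "right"): ("s1", 1.0),
    ("s1", "right"): ("s0", 0.0),
})
final_state, ret = pretrain_rollout(model, "s0", lambda s: "right", steps=4)
print(final_state, ret)  # s0 2.0
```

In the thesis's setting, the transition table would be replaced by a model trained on entity-based representations of game frames, and the rollout would be used to pre-train the agent's policy before transfer to the real environment.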

  • Subjects / Keywords
  • Graduation date
    Fall 2021
  • Type of Item
  • Degree
    Master of Science
  • DOI
  • License
    This thesis is made available by the University of Alberta Libraries with permission of the copyright owner solely for non-commercial purposes. This thesis, or any portion thereof, may not otherwise be copied or reproduced without the written consent of the copyright owner, except to the extent permitted by Canadian copyright law.