State Generalization in UCT

  • Author / Creator
    Sriram, Srinivasan
  • Abstract
    In this thesis, I study the problem of Monte-Carlo planning in deterministic domains with sparse rewards. A popular algorithm in this suite, UCT, is studied. A new algorithm that incorporates state generalization into UCT, using estimates from similar nodes and a distance metric, is presented. The algorithm's correctness and asymptotic convergence to optimality under certain conditions on the domain are also shown. A second contribution of this thesis is an algorithm for learning a local manifold of the state space when the state space does not have a natural distance metric, and for using it for state generalization in UCT. The effectiveness of the algorithm is studied by measuring its performance on multiple domains inspired by video games. Empirical evidence shows that the new algorithm is more sample efficient than UCT, especially on sparse-reward games.

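The abstract's core idea, blending a node's Monte-Carlo estimate with estimates from similar nodes weighted by a distance metric inside a UCB-style selection rule, can be sketched as follows. This is an illustrative sketch only, not the thesis's actual algorithm: the kernel choice, the `(visits, value_sum)` node representation, and all function names (`similarity_weight`, `generalized_value`, `ucb_score`) are assumptions for the example.

```python
import math

def similarity_weight(d, bandwidth=1.0):
    # Gaussian kernel over a domain-specific distance metric (assumed given);
    # closer states contribute more to the generalized estimate.
    return math.exp(-(d / bandwidth) ** 2)

def generalized_value(node, neighbors, distance, bandwidth=1.0):
    """Blend a node's own Monte-Carlo mean with estimates of similar nodes.

    `node` and each neighbor are hypothetical (visits, value_sum) pairs;
    `distance(nb)` gives a neighbor's distance from `node` under the metric.
    """
    num = node[1]  # own value sum contributes with weight 1 (distance 0)
    den = node[0]
    for nb in neighbors:
        w = similarity_weight(distance(nb), bandwidth)
        num += w * nb[1]
        den += w * nb[0]
    return num / den if den > 0 else 0.0

def ucb_score(node, neighbors, distance, parent_visits, c=1.4):
    # UCB1-style score with the generalized value replacing the raw mean.
    if node[0] == 0:
        return float("inf")  # force at least one visit per child
    exploit = generalized_value(node, neighbors, distance)
    explore = c * math.sqrt(math.log(parent_visits) / node[0])
    return exploit + explore
```

In a sparse-reward game, a node whose own rollouts have all returned zero can still inherit a nonzero estimate from nearby visited states, which is one plausible reason for the improved sample efficiency the abstract reports.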
  • Subjects / Keywords
  • Graduation date
  • Type of Item
  • Degree
    Master of Science
  • DOI
  • License
    This thesis is made available by the University of Alberta Libraries with permission of the copyright owner solely for non-commercial purposes. This thesis, or any portion thereof, may not otherwise be copied or reproduced without the written consent of the copyright owner, except to the extent permitted by Canadian copyright law.
  • Language
  • Institution
    University of Alberta
  • Degree level
  • Department
    • Department of Computing Science
  • Supervisor / co-supervisor and their department(s)
    • Erik Talvitie (Franklin & Marshall College)
    • Michael Bowling (Computing Science)
  • Examining committee members and their departments
    • Csaba Szepesvári (Computing Science)
    • Michael Bowling (Computing Science)
    • Vadim Bulitko (Computing Science)
    • Erik Talvitie (Franklin & Marshall College)