Permanent link (DOI): https://doi.org/10.7939/R38Q2B


Communities

Graduate Studies and Research, Faculty of

Collections

Theses and Dissertations

Adaptive Representation for Policy Gradient (Open Access)

Descriptions

Subject/Keyword
Representation Learning
Decision Trees
Policy Gradient
Reinforcement Learning
Type of item
Thesis
Degree grantor
University of Alberta
Author or creator
Das Gupta, Ujjwal
Supervisor and department
Talvitie, Erik (Computing Science)
Bowling, Michael (Computing Science)
Examining committee member and department
Talvitie, Erik (Computing Science)
Hoover, H. James (Computing Science)
Sutton, Richard S. (Computing Science)
Bowling, Michael (Computing Science)
Department
Department of Computing Science
Specialization
Statistical Machine Learning
Date accepted
2015-03-02T10:01:25Z
Graduation date
2015-06
Degree
Master of Science
Degree level
Master's
Abstract
Much of the focus on finding good representations in reinforcement learning has been on learning complex non-linear predictors of value. Methods like policy gradient, which do not learn a value function and instead represent the policy directly, often need fewer parameters to learn good policies. However, they typically employ a fixed parametric representation that may not be sufficient for complex domains. This thesis introduces two algorithms that can learn an adaptive representation of the policy: the Policy Tree algorithm, which learns a decision tree over different instantiations of a base policy, and the Policy Conjunction algorithm, which adds conjunctive features to any base policy that uses a linear feature representation. In both algorithms, policy gradient is used to grow the representation in the way that enables the maximum local increase in the expected return of the policy. Experiments show that these algorithms can choose genuinely helpful splits or features and significantly improve upon the commonly used linear Gibbs softmax policy, which is chosen as the base policy.
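
To make the abstract's setup concrete, the following is a minimal Python sketch of the linear Gibbs softmax base policy, a REINFORCE-style policy-gradient update, and a gradient-based score for a candidate split or feature in the spirit of the growth criterion described above. This is an illustration, not the thesis implementation; the function names, the episode format, and all hyperparameters are assumptions.

import numpy as np

# Illustrative sketch only, not the thesis code. The linear Gibbs softmax
# policy is pi(a|s) proportional to exp(phi(s)^T theta[:, a]), where phi(s)
# is a feature vector and theta has one weight column per action.

def gibbs_policy(theta, phi_s):
    prefs = phi_s @ theta            # one action preference per column
    prefs = prefs - prefs.max()      # shift for numerical stability
    p = np.exp(prefs)
    return p / p.sum()

def grad_log_pi(theta, phi_s, a):
    # d log pi(a|s) / d theta[:, a'] = phi(s) * (1[a' == a] - pi(a'|s))
    p = gibbs_policy(theta, phi_s)
    g = -np.outer(phi_s, p)
    g[:, a] += phi_s
    return g

def reinforce_update(theta, episode, alpha=0.05, gamma=1.0):
    # One REINFORCE update from an episode given as (phi_s, action, reward)
    # triples; the step size alpha and discount gamma are assumptions.
    G = 0.0
    for phi_s, a, r in reversed(episode):
        G = r + gamma * G            # return from this step onward
        theta = theta + alpha * G * grad_log_pi(theta, phi_s, a)
    return theta

def score_candidate(theta, episodes, candidate):
    # Sketch of a growth criterion like the one the abstract describes:
    # a candidate split or conjunctive feature gets new weights initialised
    # to zero, which leaves the policy unchanged, so the norm of the
    # policy-gradient estimate with respect to those weights measures the
    # local increase in expected return the candidate makes available.
    # Here `candidate` maps phi_s to the scalar value of the new feature.
    g = np.zeros(theta.shape[1])
    for episode in episodes:
        G = 0.0
        for phi_s, a, r in reversed(episode):
            G = r + G
            p = gibbs_policy(theta, phi_s)
            ga = -candidate(phi_s) * p
            ga[a] += candidate(phi_s)
            g += G * ga
    return np.linalg.norm(g)

Under these assumptions, the representation would be grown by scoring each candidate with score_candidate on a batch of episodes and adding the one with the largest score, since a zero-initialised candidate changes nothing until its weights are trained, and the gradient magnitude at zero estimates how much local improvement the candidate offers.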
Language
English
DOI
doi:10.7939/R38Q2B
Rights
Permission is hereby granted to the University of Alberta Libraries to reproduce single copies of this thesis and to lend or sell such copies for private, scholarly or scientific research purposes only. Where the thesis is converted to, or otherwise made available in digital form, the University of Alberta will advise potential users of the thesis of these terms. The author reserves all other publication and other rights in association with the copyright in the thesis and, except as herein before provided, neither the thesis nor any substantial portion thereof may be printed or otherwise reproduced in any material form whatsoever without the author's prior written permission.
Citation for previous publication
Das Gupta, U., Talvitie, E., & Bowling, M. (2015). Policy Tree: Adaptive Representation for Policy Gradient. In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence (AAAI-15).

File Details

Date Modified
2015-06-15T07:07:58.277+00:00
Characterization
File format: pdf (Portable Document Format)
Mime type: application/pdf
File size: 1,973,857 bytes (about 1.9 MB)
Last modified: 2015-10-21 00:56:15-06:00
Filename: Das Gupta_Ujjwal_201501_MSc.pdf
Original checksum: ebfad992e186e3b9c31cb0872056c189
Well formed: true
Valid: true
File title: Untitled
Page count: 47