
Communities

This file is in the following communities:

Graduate Studies and Research, Faculty of

Collections

This file is in the following collections:

Theses and Dissertations

Strengths, Weaknesses, and Combinations of Model-based and Model-free Reinforcement Learning (Open Access)

Descriptions

Subject/Keyword
Model-based
Planning
Artificial Intelligence
Reinforcement Learning
Type of item
Thesis
Degree grantor
University of Alberta
Author or creator
Asadi Atui, Kavosh
Supervisor and department
Sutton, Richard (Computing Science)
Examining committee member and department
Sutton, Richard (Computing Science)
Müller, Martin (Computing Science)
Bowling, Michael (Computing Science)
Department
Department of Computing Science
Date accepted
2015-10-14T11:32:47Z
Graduation date
2016-06
Degree
Master of Science
Degree level
Master's
Abstract
Reinforcement learning algorithms are conventionally divided into two approaches: a model-based approach that builds a model of the environment and then computes a value function from the model, and a model-free approach that directly estimates the value function. The first contribution of this thesis is to demonstrate that, with similar computational resources, neither approach dominates the other: the model-based approach achieves better performance with fewer environmental interactions, while the model-free approach reaches a more accurate solution asymptotically by using a larger representation or eligibility traces. Since the strengths offered by each approach are important for a reinforcement learning agent, it is desirable to combine the two approaches and obtain the strengths of both. The main contribution of this thesis is a new architecture in which a model-based algorithm forms an initial value function estimate and a model-free algorithm then refines and improves that estimate. Experiments show that this architecture, called the Cascade Architecture, preserves the data efficiency of the model-based algorithm. Moreover, we prove that the Cascade Architecture converges to the original model-free solution, so an imperfect model cannot impair the asymptotic performance. These results strengthen the case for combining model-based and model-free reinforcement learning.
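
The abstract's central idea lends itself to a compact sketch. What follows is a minimal tabular illustration of that idea, not the thesis's actual algorithm: the toy MDP, the names evaluate_policy, td_update, V_mb, and delta, and all constants are our own assumptions. A model-based stage plans on a deliberately imperfect learned model to produce an initial value estimate; a model-free TD(0) stage then learns an additive correction from real interaction.

    import numpy as np

    n_states, n_actions = 10, 2
    gamma = 0.95
    rng = np.random.default_rng(0)

    # True environment dynamics and rewards (unknown to the agent).
    P_true = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a, s']
    R_true = rng.normal(size=(n_states, n_actions))                        # R[s, a]

    # An imperfect learned model: the true dynamics corrupted by estimation error.
    P_hat = 0.8 * P_true + 0.2 * rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))
    R_hat = R_true + rng.normal(scale=0.5, size=R_true.shape)

    def evaluate_policy(P, R, gamma, iters=500):
        # Policy evaluation on the model, for a uniform-random policy.
        V = np.zeros(P.shape[0])
        for _ in range(iters):
            V = np.mean(R + gamma * P @ V, axis=1)
        return V

    # Stage 1 (model-based): plan with the imperfect model for an initial estimate.
    V_mb = evaluate_policy(P_hat, R_hat, gamma)

    # Stage 2 (model-free): TD(0) learns an additive correction `delta`, so the
    # combined estimate is V(s) = V_mb[s] + delta[s]. Only the correction moves.
    delta = np.zeros(n_states)

    def td_update(s, r, s_next, alpha=0.05):
        v = V_mb[s] + delta[s]
        v_next = V_mb[s_next] + delta[s_next]
        delta[s] += alpha * (r + gamma * v_next - v)

    s = 0
    for _ in range(50000):
        a = rng.integers(n_actions)                    # uniform-random policy
        s_next = rng.choice(n_states, p=P_true[s, a])  # real environment step
        td_update(s, R_true[s, a], s_next)
        s = s_next

    print("combined estimate:", V_mb + delta)

Because TD(0) updates only the correction, the fixed point of the combined estimate V_mb + delta is the ordinary TD fixed point for the real environment: any error the imperfect model leaves in V_mb is eventually absorbed into delta, which mirrors the convergence claim in the abstract.
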
Language
English
DOI
doi:10.7939/R3C24QW38 (https://doi.org/10.7939/R3C24QW38)
Rights
This thesis is made available by the University of Alberta Libraries with permission of the copyright owner solely for the purpose of private, scholarly or scientific research. This thesis, or any portion thereof, may not otherwise be copied or reproduced without the written consent of the copyright owner, except to the extent permitted by Canadian copyright law.

File Details

Date Modified
2015-10-14T19:35:03.992+00:00
Characterization
File format: pdf (Portable Document Format)
Mime type: application/pdf
File size: 2,637,817 bytes (≈2.6 MB)
Last modified: 2016-06-16 17:14:46-06:00
Filename: AsadiAtui_Kavosh_201510_MSc.pdf
Original checksum: 08389b173fb3f74f74acd9e2a20e0893
Well formed: true
Valid: true
File title: Untitled
Page count: 59