Subjects:
- Reinforcement Learning (3)
- Actor-Expert (1)
- Continuous action space (1)
- Control (1)
- Linear Function Approximation (1)
- Nonlinear Function Approximation (1)
- Fall 2019
Q-learning can be difficult to use in continuous action spaces, because a difficult optimization has to be solved to find the maximal action. Some common strategies have been to discretize the action space, solve the maximization with a powerful optimizer at each step, restrict the functional...
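One of the strategies this abstract mentions, discretizing the action space, can be illustrated with a minimal sketch. The quadratic Q-function below is a hypothetical stand-in for a learned action-value model, not anything from the thesis itself; the point is only how the argmax over a continuous range is approximated by an argmax over a grid.

```python
import numpy as np

def q_values(state, actions):
    # Toy Q-function peaked at action 0.3 (a stand-in for a learned model).
    return -(actions - 0.3) ** 2 + state

def greedy_action_discretized(state, low, high, num_bins):
    # Discretize the continuous action range into a grid and take the
    # argmax of Q over the grid points, approximating max_a Q(s, a).
    actions = np.linspace(low, high, num_bins)
    return actions[np.argmax(q_values(state, actions))]

best = greedy_action_discretized(state=0.0, low=-1.0, high=1.0, num_bins=201)
```

The approximation error shrinks as the grid gets finer, but the number of grid points grows exponentially with the action dimension, which is one reason the abstract calls this optimization difficult.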
- Fall 2019
In this thesis, we investigate sparse representations in reinforcement learning. We begin by discussing catastrophic interference in reinforcement learning with function approximation, and empirically investigating difficulties of online reinforcement learning in both policy evaluation and...
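A common way to obtain the sparse representations this abstract refers to is tile coding, where each of several offset tilings activates exactly one feature. The sketch below is an illustrative one-dimensional version with assumed parameter names (`num_tilings`, `tiles_per_tiling`), not the construction used in the thesis.

```python
import numpy as np

def tile_features(x, num_tilings=4, tiles_per_tiling=8, low=0.0, high=1.0):
    # Each tiling partitions [low, high) into equal-width tiles, offset
    # slightly from the others; exactly one feature per tiling is active,
    # so only num_tilings of the entries are nonzero (a sparse vector).
    features = np.zeros(num_tilings * tiles_per_tiling)
    width = (high - low) / tiles_per_tiling
    for t in range(num_tilings):
        offset = t * width / num_tilings
        idx = int(np.clip((x - low + offset) / width, 0, tiles_per_tiling - 1))
        features[t * tiles_per_tiling + idx] = 1.0
    return features

phi = tile_features(0.5)  # 32-dimensional vector with 4 active entries
```

Because distant inputs activate disjoint features, an update at one state barely disturbs the values of unrelated states, which is the property that makes sparsity relevant to catastrophic interference.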
- Fall 2019
Policy evaluation, the problem of learning value functions, is an integral part of the reinforcement learning problem. In this thesis, I propose a neural network architecture, the Two-Timescale Network (TTN), for value function approximation, which utilizes linear function approximation for the value function...
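The architecture described here, a linear value function on top of neural-network features, can be sketched as follows. This is a simplified illustration under assumptions not stated in the abstract: the "slow" feature network is frozen to a random map, and the fast timescale is a plain linear TD(0) update on the value weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the slowly-updated feature network: a fixed random tanh layer.
W_slow = rng.normal(size=(8, 3))

def features(state):
    # Nonlinear features produced by the slow network (frozen here for illustration).
    return np.tanh(W_slow @ state)

# Fast timescale: linear TD(0) on the value weights over those features.
w = np.zeros(8)
alpha, gamma = 0.1, 0.9

def td_update(w, s, r, s_next):
    phi, phi_next = features(s), features(s_next)
    delta = r + gamma * (w @ phi_next) - (w @ phi)  # TD error
    return w + alpha * delta * phi

s, s_next = np.ones(3), np.zeros(3)
w = td_update(w, s, r=1.0, s_next=s_next)
```

Keeping the value head linear means the fast update inherits the convergence analysis available for linear TD, while the slow network still supplies expressive features, which appears to be the motivation behind the two-timescale split.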