This decommissioned ERA site remains active temporarily to support our final migration steps to https://ualberta.scholaris.ca, ERA's new home. All new collections and items, including Spring 2025 theses, are at that site. For assistance, please contact erahelp@ualberta.ca.
Theses and Dissertations
This collection contains theses and dissertations of University of Alberta graduate students. It includes a large number of electronically available theses granted from 1947 to 2009, 90% of theses granted from 2009 to 2014, and 100% of theses granted from April 2014 to the present (excluding theses under temporary embargo by agreement with the Faculty of Graduate and Postdoctoral Studies). IMPORTANT NOTE: To conduct a comprehensive search of all U of A theses granted and held in University of Alberta Libraries collections, search the library catalogue at www.library.ualberta.ca; you may search by Author, Title, Keyword, or Department.
To retrieve all theses and dissertations associated with a specific department from the library catalogue, choose 'Advanced' and run a keyword search such as "university of alberta dept of english" OR "university of alberta department of english". Past graduates who wish to have their thesis or dissertation added to this collection can contact us at erahelp@ualberta.ca.
Items in this Collection
- White, Adam (Computing Science) (18)
- White, Martha (Computing Science) (5)
- Fyshe, Alona (Computing Science) (1)
- Machado C., Marlos (Computing Science) (1)
- Machado, Marlos C (Computing Science) (1)
- Machado, Marlos C. (Computing Science) (1)
- Chen, You Chen Eugene (1)
- Coblin, Jordan Frederick (1)
- Jacobsen, Andrew (1)
- Li, Xin (1)
- Liu, Puer (1)
- McLeod, Matthew (1)
- Reinforcement Learning (10)
- reinforcement learning (4)
- Experience Replay (2)
- Planning (2)
- Step-size adaptation (2)
- Water Treatment (2)
- Fall 2022: In most, if not all, realistic sequential decision-making tasks, the decision-making agent cannot model the full complexity of the world. In reinforcement learning, the environment is often much larger and more complex than the agent, a setting also known as partial observability. In...
- Fall 2022: In this thesis, we investigate the empirical performance of several experience replay techniques. Efficient experience replay plays an important role in model-free reinforcement learning by improving sample efficiency through reusing past experience. However, replay-based methods were largely...
- Spring 2020: Reinforcement Learning is a formalism for learning by trial and error. Unfortunately, trial and error can take a long time to find a solution if the agent does not efficiently explore the behaviours available to it. Moreover, how an agent ought to explore depends on the task that the agent is...
- Fall 2021: Reinforcement learning (RL) is a learning paradigm focusing on how agents interact with an environment to maximize cumulative reward signals emitted from the environment. The exploration-versus-exploitation challenge is critical in RL research: the agent ought to trade off between taking the known...
- Fall 2024: The sensitivity of reinforcement learning algorithm performance to hyperparameter choices poses a significant hurdle to the deployment of these algorithms in the real world, where sampling can be limited by speed, safety, or other system constraints. To mitigate this, one approach is to learn a...
- Fall 2023: In reinforcement learning (RL), agents learn to maximize a reward signal using nothing but observations from the environment as input to their decision-making processes. Whether the agent is simple, consisting of only a policy that maps observations to actions, or complex, containing auxiliary...
- Fall 2021: Learning auxiliary tasks, such as multiple predictions about the world, can provide many benefits to reinforcement learning systems. A variety of off-policy learning algorithms have been developed to learn such predictions, but as yet there is little work on how to adapt the behavior to gather...
- Fall 2023: Evaluating and ranking the difficulty and enjoyment of puzzles is important in game design. Typically, such rankings are constructed manually for each specific game, which can be time-consuming, subject to designer bias, and dependent on extensive play testing. An approach to ranking that generalizes...
- Fall 2024: Planning and goal-conditioned reinforcement learning aim to create more efficient and scalable methods for complex, long-horizon tasks. These approaches break tasks into manageable subgoals and leverage prior knowledge to guide learning. However, learned models may predict inaccurate next states...
- Fall 2022: Actor-Critics are a popular class of algorithms for control. Their ability to learn complex behaviours in continuous-action environments makes them directly applicable to many real-world scenarios. These algorithms are composed of two parts: a critic and an actor. The critic learns to critique...