Theses and Dissertations
This collection contains theses and dissertations of graduate students of the University of Alberta. It includes a large number of electronically available theses granted from 1947 to 2009, 90% of theses granted from 2009 to 2014, and 100% of theses granted from April 2014 to the present (provided the thesis is not under a temporary embargo by agreement with the Faculty of Graduate and Postdoctoral Studies). IMPORTANT NOTE: To conduct a comprehensive search of all UofA theses granted and held in University of Alberta Libraries collections, search the library catalogue at www.library.ualberta.ca; you may search by Author, Title, Keyword, or Department.
To retrieve all theses and dissertations associated with a specific department from the library catalogue, choose 'Advanced' and run a keyword search for "university of alberta dept of english" OR "university of alberta department of english" (for example). Past graduates who wish to have their thesis or dissertation added to this collection can contact us at erahelp@ualberta.ca.
Items in this Collection
- reinforcement learning (4)
- CCEM (1)
- actor-critic (1)
- agent state (1)
- conditional cross-entropy optimization (1)
- cross-entropy optimization (1)
Fall 2022
In most, if not all, realistic sequential decision-making tasks, the decision-making agent is not able to model the full complexity of the world. In reinforcement learning, the environment is often much larger and more complex than the agent, a setting also known as partial observability. In...
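As a loose illustration of the agent-state idea this abstract points to (and not the method of the thesis itself), the sketch below keeps a small fixed-size summary of the observation history rather than the full environment state; the name `update_agent_state` and the moving-average form are assumptions made purely for illustration.

```python
# Minimal illustration of an agent state under partial observability:
# the agent never sees the full environment state, so it maintains a
# compact summary of the observation history instead. All names here
# are illustrative, not taken from the thesis.
import numpy as np

def update_agent_state(agent_state, observation, lr=0.1):
    """Fold a new observation into a fixed-size agent state.

    Here the agent state is just an exponential moving average of
    observations; a recurrent network would play the same role.
    """
    return (1 - lr) * agent_state + lr * observation

rng = np.random.default_rng(0)
agent_state = np.zeros(4)             # fixed-size summary, much smaller than the world
for t in range(100):
    observation = rng.normal(size=4)  # partial, noisy view of the environment
    agent_state = update_agent_state(agent_state, observation)
print(agent_state)
```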
Fall 2021
Reinforcement learning (RL) is a learning paradigm focused on how agents interact with an environment to maximize cumulative reward signals emitted by the environment. The exploration-versus-exploitation challenge is critical in RL research: the agent must trade off between taking the known...
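The trade-off described in this abstract can be illustrated with a standard epsilon-greedy bandit sketch; the parameters (epsilon, arm means) are assumed for the example, and this is a generic textbook scheme rather than the exploration method the thesis studies.

```python
# Epsilon-greedy multi-armed bandit: with probability epsilon the agent
# explores a random action, otherwise it exploits its current estimates.
import numpy as np

rng = np.random.default_rng(1)
true_means = np.array([0.1, 0.5, 0.8])    # unknown to the agent
estimates = np.zeros(3)                   # running value estimates
counts = np.zeros(3)
epsilon = 0.1                             # probability of exploring

for t in range(1000):
    if rng.random() < epsilon:
        arm = rng.integers(3)             # explore: try a random action
    else:
        arm = int(np.argmax(estimates))   # exploit: take the best-known action
    reward = rng.normal(true_means[arm], 1.0)
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]  # incremental mean

print(estimates)  # approaches true_means given enough exploration
```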
Fall 2022
Actor-critics are a popular class of algorithms for control. Their ability to learn complex behaviours in continuous-action environments makes them directly applicable to many real-world scenarios. These algorithms are composed of two parts: a critic and an actor. The critic learns to critique...
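For readers unfamiliar with the actor/critic split this abstract describes, below is a minimal one-step actor-critic sketch on a toy two-action problem; the softmax-preference actor and average-reward critic are generic textbook choices, not the specific algorithm the thesis develops.

```python
# Minimal actor-critic: the critic estimates value and its TD error
# "critiques" each outcome; the actor shifts its policy in the direction
# the critic recommends.
import numpy as np

rng = np.random.default_rng(2)
true_means = np.array([0.2, 1.0])      # reward of each action, unknown to the agent
preferences = np.zeros(2)              # actor: action preferences (softmax policy)
baseline = 0.0                         # critic: estimated average reward
actor_lr, critic_lr = 0.1, 0.1

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

for t in range(2000):
    pi = softmax(preferences)
    a = rng.choice(2, p=pi)
    reward = rng.normal(true_means[a], 1.0)

    td_error = reward - baseline       # critic's critique of this outcome
    baseline += critic_lr * td_error   # critic update

    grad = -pi                         # gradient of log-softmax w.r.t. preferences
    grad[a] += 1.0
    preferences += actor_lr * td_error * grad  # actor update, guided by the critic

print(softmax(preferences))  # most probability should land on the better action
```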
Fall 2023
The transformer architecture is effective at processing sequential data, both because of its ability to leverage parallelism and because of its self-attention mechanism, which can capture long-range dependencies. However, the self-attention mechanism is slow for streaming data, that is, when...
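The cost this abstract alludes to can be seen in a bare-bones causal self-attention sketch: with a naive key/value cache, producing the output for each newly streamed token attends over the entire history, so the per-step cost grows with sequence length. The single-head setup, shapes, and names are assumptions for illustration, not the architecture studied in the thesis.

```python
# Naive streaming self-attention: the key/value cache grows with every
# token, and each new output is a weighted sum over all cached entries,
# giving O(t) work at step t.
import numpy as np

d = 8
rng = np.random.default_rng(3)
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

keys, values = [], []           # cache grows with every streamed token

def attend(x_t):
    """Causal self-attention output for the newest token x_t."""
    q = x_t @ Wq
    keys.append(x_t @ Wk)
    values.append(x_t @ Wv)
    K = np.stack(keys)           # (t, d): all tokens seen so far
    V = np.stack(values)
    scores = K @ q / np.sqrt(d)  # one dot product per past token -> O(t) per step
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ V

for t in range(16):
    out = attend(rng.normal(size=d))
print(out.shape)  # (8,)
```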