Theses and Dissertations
This collection contains theses and dissertations of graduate students of the University of Alberta. It includes a large number of electronically available theses granted from 1947 to 2009, 90% of theses granted from 2009 to 2014, and 100% of theses granted from April 2014 to the present (excluding theses under temporary embargo by agreement with the Faculty of Graduate and Postdoctoral Studies). IMPORTANT NOTE: To conduct a comprehensive search of all University of Alberta theses held in University of Alberta Libraries collections, search the library catalogue at www.library.ualberta.ca by Author, Title, Keyword, or Department.
To retrieve all theses and dissertations associated with a specific department from the library catalogue, choose 'Advanced' search and run a keyword search such as "university of alberta dept of english" OR "university of alberta department of english". Past graduates who wish to have their thesis or dissertation added to this collection can contact us at erahelp@ualberta.ca.
Items in this Collection
- Reinforcement Learning (76)
- Machine Learning (17)
- Artificial Intelligence (8)
- Transfer Learning (6)
- Planning (5)
- Representation Learning (5)
- Abbasi-Yadkori, Yasin (1)
- Aghakasiri, Kiarash (1)
- Alikhasi, Mahdi (1)
- Asadi Atui, Kavosh (1)
- Banafsheh Rafiee (1)
- Behboudian, Paniz (1)
- Leveraging Large Language Models for Speeding Up Local Search Algorithms for Computing Programmatic Best Responses (Fall 2024)
Despite having advantages such as generalizability and interpretability over neural representations, programmatic representations of hypotheses and strategies face significant challenges. This is because algorithms writing programs encoding hypotheses for solving supervised learning problems and...
- Fall 2023
Partial observability---when the senses lack enough detail to make an optimal decision---is the reality of any decision-making agent acting in the real world. While an agent could be made to make do with its available senses, taking advantage of the history of senses can provide more context and...
- Spring 2022
Reinforcement learning (RL) has shown great success in solving many challenging tasks via the use of deep neural networks. Although the use of deep learning for RL brings immense representational power to the arsenal, it also causes sample inefficiency. This means that the algorithms are...
- Fall 2022
Monte Carlo Tree Search (MCTS) is a popular tree search framework for choosing actions in decision-making problems. MCTS is traditionally applied to applications in which a perfect simulation model is available. However, when the model is imperfect, the performance of MCTS drops heavily. In...
- Fall 2022
Monte Carlo Tree Search (MCTS) is an extremely successful search-based framework for decision making. With an accurate simulator of the environment’s dynamics, it can achieve great performance in many games and non-games applications. However, without a perfect simulator, the performance...
- Fall 2021
The performance of reinforcement learning (RL) agents is sensitive to the choice of hyperparameters. In real-world settings like robotics or industrial control systems, however, testing different hyperparameter configurations directly on the environment can be financially prohibitive, dangerous,...
- Fall 2021
The optimization of non-convex objective functions is a topic of central interest in machine learning. Remarkably, it has recently been shown that simple gradient-based optimization can achieve globally optimal solutions in important non-convex problems that arise in machine learning, including...
- On Efficient Planning in Large Action Spaces with Applications to Cooperative Multi-Agent Reinforcement Learning (Fall 2023)
A practical challenge in reinforcement learning is large action spaces that make planning computationally demanding. For example, in cooperative multi-agent reinforcement learning, a potentially large number of agents jointly optimize a global reward function, which leads to a blow-up in the...
- Spring 2024
In machine learning, sparse neural networks provide higher computational efficiency and, in some cases, can perform just as well as fully-connected networks. In the online and incremental reinforcement learning (RL) problem, Prediction Adapted Networks (Martin and Modayil, 2021) is an algorithm...