-
Spring 2016
This thesis proposes, analyzes, and tests different exploration-based techniques in Greedy Best-First Search (GBFS) for satisficing planning. First, we show the potential of exploration-based techniques by locally combining GBFS with random-walk exploration. We then conduct a deep analysis of how...
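The core mechanism described here — greedy best-first search that falls back on short random walks when the heuristic stops improving — can be sketched as follows. This is an illustrative sketch, not the thesis's actual algorithm; all names and parameters (`stall_limit`, `walk_len`) are assumptions.

```python
import heapq
import random

def gbfs_random_walk(start, goal, neighbors, h,
                     stall_limit=20, walk_len=5, seed=0):
    # Greedy best-first search: always expand the open node with the
    # lowest heuristic value; on a heuristic plateau, inject states
    # reached by a short local random walk (illustrative sketch).
    rng = random.Random(seed)
    open_list = [(h(start), start)]
    parent = {start: None}
    best_h, stalls = h(start), 0
    while open_list:
        _, s = heapq.heappop(open_list)
        if s == goal:
            path = []
            while s is not None:
                path.append(s)
                s = parent[s]
            return path[::-1]
        if h(s) < best_h:
            best_h, stalls = h(s), 0
        else:
            stalls += 1
        for n in neighbors(s):
            if n not in parent:
                parent[n] = s
                heapq.heappush(open_list, (h(n), n))
        if stalls >= stall_limit:
            # Heuristic plateau detected: run a short random walk from s
            # and seed the open list with the states it visits.
            stalls, w = 0, s
            for _ in range(walk_len):
                nxt = list(neighbors(w))
                if not nxt:
                    break
                n = rng.choice(nxt)
                if n not in parent:
                    parent[n] = w
                    heapq.heappush(open_list, (h(n), n))
                w = n
    return None
```

On an easy instance (e.g. a grid with a Manhattan-distance heuristic) this behaves like plain GBFS; the random walks only matter when the search stalls on a plateau.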
-
Fall 2022
This thesis investigates a new approach to model-based reinforcement learning using background planning: mixing (approximate) dynamic programming updates and model-free updates, similar to the Dyna architecture. Background planning with learned models is often worse than model-free alternatives,...
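The Dyna architecture referenced here interleaves model-free updates from real experience with background planning updates replayed from a learned model. Below is a minimal tabular Dyna-Q sketch of that idea; the thesis's actual method mixes approximate dynamic-programming updates with model-free updates, so this is only the classic baseline, with illustrative parameter choices.

```python
import random

def dyna_q(step, reset, n_states, n_actions, episodes=100, max_steps=1000,
           n_planning=10, alpha=0.1, gamma=0.95, eps=0.3, seed=0):
    # Tabular Dyna-Q: every real transition is (1) used for a direct
    # Q-learning update, (2) stored in a learned model, and (3) replayed
    # n_planning times as background planning updates.
    rng = random.Random(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]
    model = {}  # (s, a) -> (r, s2, done)

    def update(s, a, r, s2, done):
        target = r if done else r + gamma * max(Q[s2])
        Q[s][a] += alpha * (target - Q[s][a])

    for _ in range(episodes):
        s, done, t = reset(), False, 0
        while not done and t < max_steps:
            if rng.random() < eps:                       # epsilon-greedy
                a = rng.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda a: Q[s][a])
            s2, r, done = step(s, a)
            update(s, a, r, s2, done)                    # model-free update
            model[(s, a)] = (r, s2, done)                # learn the model
            for _ in range(n_planning):                  # background planning
                ps, pa = rng.choice(list(model))
                pr, ps2, pdone = model[(ps, pa)]
                update(ps, pa, pr, ps2, pdone)
            s, t = s2, t + 1
    return Q
```

On a small deterministic chain MDP, the planning updates propagate the goal reward back through the state space much faster than model-free updates alone would.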
-
Fall 2017
Modern board, card, and video games are challenging domains for AI research due to their complex game mechanics and large state and action spaces. For instance, in Hearthstone — a popular collectible card video game developed by Blizzard Entertainment — two players first construct their...
-
Integral Urbanism: Investigating the Materiality and Spatiality of the University of Alberta Quadrangle
Fall 2015
The university quadrangle is a space that exists on the majority of North American campuses, yet detailed investigation into the creation, existence and perpetuation of the quadrangle has been minimal. Considering how universities look to distinguish themselves from one another in search of the...
-
Spring 2014
Coach learning is a key component for developing quality coaches. While researchers have identified many ways that coaches learn, there is little agreement as to how coaches learn best. As a way of examining these discrepancies found in the research, this study’s aim was to explore how Canadian...
-
Spring 2016
In model-based reinforcement learning, a model is learned and then used to find good actions. Which model should be learned? We investigate this question in the context of two different approaches to model-based reinforcement learning. We also investigate how one should learn and plan when the reward...
-
Fall 2012
Earthwork operations for reclamation add challenges and complications to common earthwork schedules and to aspects such as placement locations and hauling routes. Reclamation earthworks require that the soil layer structure in place before the land was disturbed remain the same after...
-
Fall 2022
Monte Carlo Tree Search (MCTS) is a popular tree search framework for choosing actions in decision-making problems. MCTS is traditionally applied to applications in which a perfect simulation model is available. However, when the model is imperfect, the performance of MCTS drops sharply. In...
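For reference, the standard framework this thesis starts from can be sketched as a minimal UCT (MCTS with the UCB1 tree policy) for a deterministic model. This sketch assumes a perfect simulator `step(state, action) -> (next_state, reward)` — exactly the assumption the thesis relaxes; all names are illustrative.

```python
import math
import random

def uct(step, root, n_actions, depth, n_iters=500, c=1.4, seed=0):
    # Minimal UCT: repeatedly simulate from the root, selecting actions
    # inside the tree by UCB1 and estimating leaf values by random
    # rollouts; finally return the most-visited root action.
    rng = random.Random(seed)
    N, W, expanded = {}, {}, set()  # visit counts, total returns, tree nodes

    def rollout(s, d):
        g = 0.0
        while d > 0:
            s, r = step(s, rng.randrange(n_actions))
            g, d = g + r, d - 1
        return g

    def simulate(s, d):
        if d == 0:
            return 0.0
        if s not in expanded:                # leaf: expand, then rollout
            expanded.add(s)
            for a in range(n_actions):
                N[(s, a)], W[(s, a)] = 0, 0.0
            return rollout(s, d)
        total = sum(N[(s, a)] for a in range(n_actions)) + 1
        a = max(range(n_actions), key=lambda a: (
            float('inf') if N[(s, a)] == 0 else
            W[(s, a)] / N[(s, a)] + c * math.sqrt(math.log(total) / N[(s, a)])))
        s2, r = step(s, a)
        g = r + simulate(s2, d - 1)
        N[(s, a)] += 1                       # back up along the path
        W[(s, a)] += g
        return g

    for _ in range(n_iters):
        simulate(root, depth)
    return max(range(n_actions), key=lambda a: N[(root, a)])
```

When the simulator `step` is imperfect, the rollout returns and backed-up values are biased — the failure mode the abstract describes.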
-
Fall 2012
This thesis consists of two parts. First, we introduce an abstraction framework called multimapping, which allows multiple admissible heuristic values to be extracted from a single abstract space. The key idea of this technique is to design a multimapping function that maps one state in the original...
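The principle behind combining several heuristic values from abstractions is that the maximum of admissible heuristics is itself admissible. A toy sketch of that idea, using coordinate projections as the mappings and Manhattan distance as the abstract solution cost (not the thesis's actual multimapping construction):

```python
def manhattan(p, q):
    # L1 distance between two equal-length coordinate tuples
    return sum(abs(a - b) for a, b in zip(p, q))

def multimap_h(state, goal, mappings):
    # Each mapping projects the concrete state into an abstract space;
    # the abstract solution cost (here: Manhattan distance in the
    # projection) never overestimates the true cost, and the max of
    # admissible estimates is itself admissible.
    return max(manhattan(m(state), m(goal)) for m in mappings)
```

Taking the maximum over projections gives a strictly more informed estimate than any single projection while preserving admissibility.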
-
On Efficient Planning in Large Action Spaces with Applications to Cooperative Multi-Agent Reinforcement Learning
Fall 2023
A practical challenge in reinforcement learning is large action spaces that make planning computationally demanding. For example, in cooperative multi-agent reinforcement learning, a potentially large number of agents jointly optimize a global reward function, which leads to a blow-up in the...
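The blow-up mentioned here is that the joint action space has size |A|^n for n agents. One common way to avoid enumerating it (a generic illustration, not necessarily the thesis's method) is coordinate-wise maximization: optimize one agent's action at a time while holding the others fixed.

```python
from itertools import product

def exhaustive_argmax(q, n_agents, n_actions):
    # Enumerates all n_actions ** n_agents joint actions: exponential in
    # the number of agents, quickly infeasible.
    return max(product(range(n_actions), repeat=n_agents), key=q)

def coordinate_ascent_argmax(q, n_agents, n_actions, sweeps=3):
    # Optimizes each agent's action in turn, holding the others fixed:
    # O(sweeps * n_agents * n_actions) evaluations of q instead of
    # exponential. Exact when q decomposes per agent; a local optimum
    # otherwise.
    joint = [0] * n_agents
    for _ in range(sweeps):
        for i in range(n_agents):
            joint[i] = max(range(n_actions),
                           key=lambda a: q(tuple(joint[:i] + [a] + joint[i + 1:])))
    return tuple(joint)
```

For a global reward that is a sum of per-agent utilities, the coordinate-wise search recovers the exhaustive argmax at a tiny fraction of the cost.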