-
Fall 2019
In this thesis, we investigate different vector step-size adaptation approaches for continual, online prediction problems. Vanilla stochastic gradient descent can be considerably improved by scaling the update with a vector of appropriately chosen step-sizes. Many methods, including AdaGrad,...
-
Spring 2019
In this thesis, we introduce a new loss for regression, the Histogram Loss. There is some evidence that, in the problem of sequential decision making, estimating the full distribution of return offers a considerable gain in performance, even though only the mean of that distribution is used in...
-
Fall 2020
For artificially intelligent learning systems to be deployed widely in real-world settings, it is important that they be able to operate decentrally. Unfortunately, decentralized control is challenging. Even finding approximately optimal joint policies of decentralized partially observable Markov...
-
Chasing Hallucinated Value: A Pitfall of Dyna Style Algorithms with Imperfect Environment Models
Spring 2020
In Dyna style algorithms, reinforcement learning (RL) agents use a model of the environment to generate simulated experience. By updating on this simulated experience, Dyna style algorithms allow agents to potentially learn control policies in fewer environment interactions than agents that use...
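The Dyna loop described in this abstract (act, update on real experience, then update on model-simulated experience) can be sketched minimally as follows. The chain environment, reward scheme, and hyperparameters here are illustrative assumptions, not taken from the thesis:

```python
import random

# Minimal tabular Dyna-Q sketch on a tiny deterministic chain environment.
# Environment, rewards, and hyperparameters are illustrative assumptions.
N_STATES = 5            # states 0..4; reaching state 4 ends an episode
ACTIONS = (+1, -1)      # move right or left along the chain
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1
PLANNING_STEPS = 10     # simulated (model-based) updates per real step

def step(s, a):
    """Deterministic chain: reward 1 only on reaching the goal state."""
    s2 = max(0, min(N_STATES - 1, s + a))
    return s2, (1.0 if s2 == N_STATES - 1 else 0.0)

def dyna_q(episodes=50, seed=0):
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    model = {}  # (s, a) -> (s', r), learned from real experience
    for _ in range(episodes):
        s = 0
        while s != N_STATES - 1:
            # Epsilon-greedy action selection.
            a = rng.choice(ACTIONS) if rng.random() < EPS \
                else max(ACTIONS, key=lambda b: Q[(s, b)])
            s2, r = step(s, a)
            # Direct RL update from the real transition.
            Q[(s, a)] += ALPHA * (r + GAMma_max(Q, s2) - Q[(s, a)])
            model[(s, a)] = (s2, r)
            # Planning: replay previously seen transitions from the model.
            for _ in range(PLANNING_STEPS):
                ps, pa = rng.choice(list(model))
                ps2, pr = model[(ps, pa)]
                Q[(ps, pa)] += ALPHA * (pr + GAMma_max(Q, ps2) - Q[(ps, pa)])
            s = s2
    return Q

def GAMma_max(Q, s):
    """Discounted value of the greedy action in state s."""
    return GAMMA * max(Q[(s, b)] for b in ACTIONS)

Q = dyna_q()
```

Because the model replays each stored transition many times per real step, the agent needs far fewer environment interactions than model-free Q-learning; the thesis's point is that this benefit can invert when the model is imperfect.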
-
Spring 2020
In model-based reinforcement learning, planning with an imperfect model of the environment has the potential to harm learning progress. But even when a model is imperfect, it may still contain information that is useful for planning. In this thesis, we investigate the idea of using an imperfect...
-
Greedification Operators for Policy Optimization: Investigating Forward and Reverse KL Divergences
Fall 2020
Policy gradient methods typically estimate both explicit policy and value functions. The long-extant view of policy gradient methods as approximate policy iteration---alternating between policy evaluation and policy improvement by greedification---is a helpful framework to elucidate algorithmic...
-
Spring 2020
Mapping the macrostructural connectivity of the living human brain is one of the primary goals of neuroscientists who study connectomics. The reconstruction of a brain's structural connectivity, aka its connectome, typically involves applying expert analysis to diffusion-weighted magnetic...
-
Spring 2020
Reinforcement Learning is a formalism for learning by trial and error. Unfortunately, trial and error can take a long time to find a solution if the agent does not efficiently explore the behaviours available to it. Moreover, how an agent ought to explore depends on the task that the agent is...
-
Fall 2021
Structural credit assignment in neural networks is a long-standing problem, with a variety of alternatives to backpropagation proposed to allow for local training of nodes. One of the early strategies was to treat each node as an agent and use a reinforcement learning method called REINFORCE to...
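The node-as-agent idea mentioned here can be illustrated with a single Bernoulli-logistic unit updated by REINFORCE from a global scalar reward, in the style of Williams (1992); a full network would apply the same rule at every node. The task (learning OR), learning rate, and episode count are illustrative assumptions, not from the thesis:

```python
import math, random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train(epochs=5000, lr=0.5, seed=0):
    """One Bernoulli-logistic 'agent' trained by REINFORCE on a global reward."""
    rng = random.Random(seed)
    w = [rng.uniform(-0.1, 0.1) for _ in range(3)]  # two weights + bias
    data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]  # OR task
    for _ in range(epochs):
        x, target = rng.choice(data)
        p = sigmoid(w[0] * x[0] + w[1] * x[1] + w[2])
        y = 1 if rng.random() < p else 0    # stochastic firing
        r = 1.0 if y == target else 0.0     # global scalar reward broadcast
        # REINFORCE: grad of log-probability is (y - p) * input for this unit.
        for i, xi in enumerate((x[0], x[1], 1.0)):
            w[i] += lr * r * (y - p) * xi
    return w

w = train()
```

No backpropagated error signal is needed: each unit adjusts its weights using only its own sampled output and the shared reward, which is what makes this a local (if high-variance) alternative to backpropagation.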
-
Fall 2021
A common scientific challenge for putting a reinforcement learning agent into practice is how to improve sample efficiency as much as possible with limited computational or memory resources. Such available physical resources may vary in different applications. My thesis introduces some approaches...