- Fall 2019
Q-learning can be difficult to use in continuous action spaces, because a difficult optimization has to be solved to find the maximal action. Some common strategies have been to discretize the action space, solve the maximization with a powerful optimizer at each step, restrict the functional...
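The discretization strategy this abstract lists can be sketched as follows; `q_func` here is a hypothetical stand-in for a learned action-value model, not anything from the thesis:

```python
import numpy as np

# Toy action-value function with a known maximum at action = 0.3 * state
# (purely illustrative; a real Q-function would be learned).
def q_func(state, action):
    return -(action - 0.3 * state) ** 2

def max_action_discretized(state, low=-1.0, high=1.0, n=101):
    """Approximate argmax_a Q(state, a) by evaluating Q on a uniform grid
    over [low, high] -- the 'discretize the action space' strategy."""
    actions = np.linspace(low, high, n)
    values = np.array([q_func(state, a) for a in actions])
    return actions[np.argmax(values)]

best = max_action_discretized(state=1.0)  # close to the true maximizer 0.3
```

The trade-off is resolution versus cost: a finer grid gets closer to the true maximizer but requires more Q evaluations per step, which is why the abstract also mentions running a full optimizer or restricting the functional form instead.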
- Fall 2019
The Nerve Excitability Test (NET) is an electrodiagnostic test capable of non-invasive characterization of peripheral nerves in humans. It has utility in differentiating between healthy controls and subjects with peripheral nerve disorders. Full realization of the diagnostic potential of NET...
- Fixed Point Propagation: A New Way To Train Recurrent Neural Networks Using Auxiliary Variables
  Fall 2019
Recurrent neural networks (RNNs), along with their many variants, provide a powerful tool for online prediction in partially observable problems. Two issues with RNNs, however, are the difficulty of capturing long-term dependencies and their long training times. There have been a variety of strategies...
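For context on the long-term dependency issue, the following generic illustration (not the thesis's fixed-point method) shows why gradients through many recurrent steps can vanish: with `h_t = tanh(W h_{t-1})`, the Jacobian of a late state with respect to an early one is a product of per-step Jacobians, which shrinks geometrically when the recurrent weights are small.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4))
W *= 0.5 / np.linalg.norm(W, 2)  # rescale so the spectral norm is 0.5

h = rng.standard_normal(4)
jacobian = np.eye(4)
norms = []
for _ in range(30):
    h = np.tanh(W @ h)
    # Jacobian of tanh(W h_prev) w.r.t. h_prev: diag(1 - tanh^2) @ W,
    # accumulated into the product d h_T / d h_0.
    jacobian = np.diag(1.0 - h ** 2) @ W @ jacobian
    norms.append(np.linalg.norm(jacobian))

# norms decays toward zero: early inputs barely influence late states,
# which is the long-term dependency problem in miniature.
```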
- Fall 2019
In this thesis, we investigate sparse representations in reinforcement learning. We begin by discussing catastrophic interference in reinforcement learning with function approximation, and empirically investigating difficulties of online reinforcement learning in both policy evaluation and...
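A toy illustration of why sparsity can reduce the interference this abstract describes, under linear function approximation (all feature vectors and targets here are made up for the example): updating the value estimate for one state moves the estimate for another state exactly when their feature vectors overlap.

```python
import numpy as np

def update(w, phi, target, alpha=0.5):
    """One stochastic-gradient step on the squared error (w . phi - target)^2 / 2."""
    return w - alpha * (w @ phi - target) * phi

# Dense features: the two states share every active component.
phi_a_dense = np.array([1.0, 1.0, 1.0, 1.0]) / 2.0
phi_b_dense = np.array([1.0, 1.0, 1.0, -1.0]) / 2.0

# Sparse features: the two states activate disjoint components.
phi_a_sparse = np.array([1.0, 0.0, 0.0, 0.0])
phi_b_sparse = np.array([0.0, 1.0, 0.0, 0.0])

w_dense, w_sparse = np.zeros(4), np.zeros(4)
before_dense = w_dense @ phi_b_dense
before_sparse = w_sparse @ phi_b_sparse

# Update only state A's value toward 1.0 in both parameterizations.
w_dense = update(w_dense, phi_a_dense, target=1.0)
w_sparse = update(w_sparse, phi_a_sparse, target=1.0)

drift_dense = abs(w_dense @ phi_b_dense - before_dense)    # B's estimate moved
drift_sparse = abs(w_sparse @ phi_b_sparse - before_sparse)  # B's estimate untouched
```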
- Fall 2019
Policy evaluation, the problem of learning value functions, is an integral part of reinforcement learning. In this thesis, I propose a neural network architecture, the Two-Timescale Network (TTN), for value function approximation, which utilizes linear function approximation for the value function...
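The linear value-function piece can be sketched with a standard linear TD(0) update (a generic sketch only: the TTN's learned features are replaced here by a hand-coded feature map, and nothing below reproduces the thesis's implementation):

```python
import numpy as np

def td0_update(w, phi_s, reward, phi_next, gamma=0.9, alpha=0.1):
    """One linear TD(0) step: w += alpha * delta * phi(s),
    where delta is the temporal-difference error."""
    delta = reward + gamma * (w @ phi_next) - (w @ phi_s)
    return w + alpha * delta * phi_s

# Two-state chain: state 0 -> state 1 -> terminal, reward 1 on the final step.
# One-hot features stand in for the features a TTN would learn.
phi = {0: np.array([1.0, 0.0]), 1: np.array([0.0, 1.0]), None: np.zeros(2)}

w = np.zeros(2)
for _ in range(200):
    w = td0_update(w, phi[0], 0.0, phi[1])     # transition 0 -> 1, reward 0
    w = td0_update(w, phi[1], 1.0, phi[None])  # transition 1 -> terminal, reward 1

# w approaches the true values V(1) = 1 and V(0) = gamma * V(1) = 0.9
```

With linear features, this policy-evaluation step is convex in `w`, which is one practical appeal of keeping the value function linear on top of learned features.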