This decommissioned ERA site remains active temporarily to support our final migration steps to https://ualberta.scholaris.ca, ERA's new home. All new collections and items, including Spring 2025 theses, are at that site. For assistance, please contact erahelp@ualberta.ca.
-
Spring 2015
Much of the focus on finding good representations in reinforcement learning has been on learning complex non-linear predictors of value. Methods like policy gradient, which do not learn a value function and instead represent the policy directly, often need fewer parameters to learn good policies....
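The direct policy representation this abstract alludes to can be sketched with a minimal REINFORCE-style learner on an invented two-armed bandit: the only parameters are action preferences, and no value function is learned. The bandit, payoffs, step size, and seed are all illustrative assumptions, not the thesis's setup.

```python
import math
import random

random.seed(0)
prefs = [0.0, 0.0]          # policy parameters: one preference per arm
alpha = 0.1                 # step size (assumed)

def softmax(p):
    e = [math.exp(x) for x in p]
    s = sum(e)
    return [x / s for x in e]

for _ in range(500):
    probs = softmax(prefs)
    a = 0 if random.random() < probs[0] else 1
    r = 1.0 if a == 0 else 0.0            # assumption: arm 0 pays off, arm 1 never does
    # REINFORCE update: move preferences in the direction of the
    # log-probability gradient of the action taken, scaled by reward.
    for i in range(2):
        grad = (1.0 - probs[i]) if i == a else -probs[i]
        prefs[i] += alpha * r * grad

print(softmax(prefs))       # the policy comes to strongly prefer arm 0
```

The point of the sketch is that the two preference values *are* the policy: nothing here estimates how much reward an arm is worth, only which arm to prefer.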
-
Fall 2023
In reinforcement learning (RL), agents learn to maximize a reward signal using nothing but observations from the environment as input to their decision-making processes. Whether the agent is simple, consisting of only a policy that maps observations to actions, or complex, containing auxiliary...
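The "simple agent" case described above — nothing but a policy mapping observations to actions, adjusted by reward — can be sketched in a few lines. The two-observation environment, the match-the-observation reward, and the switch-on-failure rule are all invented for illustration.

```python
import random

random.seed(0)

def env_step(obs, action):
    """Toy environment: reward 1 when the action matches the observation."""
    reward = 1.0 if action == obs else 0.0
    next_obs = random.randrange(2)
    return next_obs, reward

# the entire agent: a tabular policy from observation to action
policy = {0: 0, 1: 0}
obs, total = random.randrange(2), 0.0
for _ in range(200):
    action = policy[obs]
    next_obs, r = env_step(obs, action)
    if r == 0.0:                  # crude learning rule: switch action on failure
        policy[obs] = 1 - policy[obs]
    total += r
    obs = next_obs

print(policy)                     # settles on the rewarding mapping {0: 0, 1: 1}
```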
-
Spring 2023
Gradient-descent algorithms suffer from many problems when learning representations using fixed neural network architectures, such as reduced plasticity on non-stationary continual tasks and difficulty training sparse architectures from scratch. A common workaround is continuously adapting the neural...
-
Fall 2022
Modern representation learning methods perform well on offline tasks and primarily revolve around batch updates. However, batch updates preclude those methods from focusing on new experience, which is essential for fast online adaptation. In this thesis, we study an online and incremental...
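The online, incremental flavour contrasted here with batch updates can be sketched with a single weight updated one sample at a time (an LMS-style rule) while the target shifts mid-stream — exactly the non-stationarity that batch methods handle poorly. The stream, the shift point, and the step size are assumptions for illustration.

```python
import random

random.seed(0)
w, alpha = 0.0, 0.1

def sample(t):
    true_w = 1.0 if t < 500 else -1.0     # target flips halfway (assumed)
    x = random.uniform(-1, 1)
    return x, true_w * x

for t in range(1000):
    x, y = sample(t)
    # incremental update from this one sample; no batch is ever formed
    w += alpha * (y - w * x) * x

print(round(w, 2))   # the weight has tracked the shifted target, near -1.0
```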
-
Learning Deep Representations, Embeddings and Codes from the Pixel Level of Natural and Medical Images
Fall 2013
Significant research has gone into engineering representations that can identify high-level semantic structure in images, such as objects, people, events and scenes. Recently there has been a shift towards learning representations of images either on top of dense features or directly from the...
-
Fall 2020
Language Modeling (LM) is often formulated as a next-word prediction problem over a large vocabulary, which makes it challenging. To perform next-word prediction effectively, Long Short-Term Memory networks (LSTMs) must keep track of many types of information. Some information is...
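The next-word prediction formulation itself can be sketched without an LSTM at all: a count-based bigram model (a deliberate stand-in for the recurrent network, over an invented toy corpus) already has the same interface — given the previous word, predict the next.

```python
from collections import Counter, defaultdict

# toy corpus (invented); real LM vocabularies are vastly larger
corpus = "the cat sat on the mat the cat ran".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict(prev):
    """Return the most frequent next word observed after `prev`."""
    return bigrams[prev].most_common(1)[0][0]

print(predict("the"))   # "cat" follows "the" twice, "mat" once
```

What the LSTM adds over this sketch is conditioning on arbitrarily long context rather than a single previous word — which is precisely why it must track many types of information at once.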
-
Sample-Efficient Control with Directed Exploration in Discounted MDPs Under Linear Function Approximation
Spring 2022
An important goal of online reinforcement learning algorithms is efficient data collection to learn near-optimal behaviour, that is, optimizing the exploration-exploitation trade-off to reduce the sample complexity of learning. To improve the sample complexity of learning, it is essential that the...
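The directed-exploration idea can be sketched with upper-confidence-bound (UCB) action selection on a toy two-armed bandit — a deliberate simplification of the thesis's linear-function-approximation setting; the arms, payoffs, and bonus form are illustrative assumptions.

```python
import math
import random

random.seed(0)
means = [0.3, 0.7]            # assumed arm payoffs; arm 1 is better
counts = [0, 0]
totals = [0.0, 0.0]

def ucb(a, t):
    if counts[a] == 0:
        return float("inf")   # force every arm to be tried once
    mean = totals[a] / counts[a]
    # optimism bonus: shrinks as an arm is pulled more, directing
    # exploration toward under-sampled arms
    return mean + math.sqrt(2.0 * math.log(t) / counts[a])

for t in range(1, 1001):
    a = max(range(2), key=lambda i: ucb(i, t))
    reward = 1.0 if random.random() < means[a] else 0.0
    counts[a] += 1
    totals[a] += reward

print(counts)                 # pulls concentrate on the better arm
```

Exploration here is directed by uncertainty (the bonus term) rather than by random dithering, which is the sense in which it reduces sample complexity.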
-
Fall 2019
In this thesis, we investigate sparse representations in reinforcement learning. We begin by discussing catastrophic interference in reinforcement learning with function approximation, and empirically investigating difficulties of online reinforcement learning in both policy evaluation and...
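The interference phenomenon discussed above can be sketched in a few lines of linear function approximation: an update to one state's prediction perturbs another state's prediction exactly when their features overlap, and leaves it untouched when the features are sparse and disjoint. The states, features, and step size are invented for illustration.

```python
def update(weights, feats, target, alpha=0.5):
    """One gradient step on a linear predictor toward `target`."""
    pred = sum(w * f for w, f in zip(weights, feats))
    err = target - pred
    return [w + alpha * err * f for w, f in zip(weights, feats)]

def predict(weights, feats):
    return sum(w * f for w, f in zip(weights, feats))

dense_A, dense_B = [1.0, 1.0, 0.0], [0.0, 1.0, 1.0]    # share feature 1
sparse_A, sparse_B = [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]  # disjoint supports

w_dense = update([0.0, 0.0, 0.0], dense_A, target=1.0)
w_sparse = update([0.0, 0.0, 0.0], sparse_A, target=1.0)

# B's prediction started at 0.0 in both cases; only the dense one moved
print(predict(w_dense, dense_B))    # nonzero: interference from updating A
print(predict(w_sparse, sparse_B))  # 0.0: sparse features insulate B
```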
-
The Contrastive Gap: A New Perspective on the ‘Modality Gap’ in Multimodal Contrastive Learning
Fall 2024
Learning jointly from images and texts using contrastive pre-training has emerged as an effective method to train large-scale models with a strong grasp of semantic image concepts. For instance, CLIP, pre-trained on a large corpus of web data, excels in tasks like zero-shot image classification,...
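The symmetric contrastive objective behind CLIP-style pre-training can be sketched directly: matched image-text pairs sit on the diagonal of a similarity matrix, and the loss is cross-entropy in both directions. The random embeddings, batch size, and temperature below are stand-ins for real encoder outputs, not the thesis's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 4, 8
img = rng.normal(size=(n, d))   # stand-in for image-encoder outputs
txt = rng.normal(size=(n, d))   # stand-in for text-encoder outputs
img /= np.linalg.norm(img, axis=1, keepdims=True)   # project to unit sphere
txt /= np.linalg.norm(txt, axis=1, keepdims=True)

def contrastive_loss(img, txt, temperature=0.07):
    logits = img @ txt.T / temperature        # scaled cosine similarities
    # log-softmax over texts for each image (rows) and over images
    # for each text (columns); matched pairs lie on the diagonal
    log_p_img = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    log_p_txt = logits - np.log(np.exp(logits).sum(axis=0, keepdims=True))
    diag = np.arange(len(img))
    return -(log_p_img[diag, diag].mean() + log_p_txt[diag, diag].mean()) / 2

print(contrastive_loss(img, txt))
```

Because the loss only compares similarities *within* each modality's softmax, the two embedding sets can minimize it while occupying separated regions of the sphere — the geometric puzzle the "modality gap" literature studies.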
-
Fall 2020
We explore the interplay of generate-and-test and gradient-descent techniques for solving online supervised learning problems. The task in supervised learning is to learn a function from samples of input-output pairs. This function is called the target function. The standard way to learn...
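The generate-and-test idea can be sketched as a feature-search loop: a generator proposes candidate features, a tester scores each by its usefulness for predicting the target, and the least useful candidate is periodically replaced. The target function, feature form, and schedule below are invented for illustration.

```python
import random

random.seed(1)

def target(x):
    return x[0] - x[2]            # assumed true function: uses inputs 0 and 2

def make_feature():
    """Generator: propose a random signed copy of one input."""
    i = random.randrange(5)
    s = random.choice([-1.0, 1.0])
    return (i, s)

features = [make_feature() for _ in range(6)]
utility = [0.0] * len(features)

for step in range(300):
    x = [random.uniform(-1, 1) for _ in range(5)]
    y = target(x)
    for k, (i, s) in enumerate(features):
        # tester: running correlation between feature output and target
        utility[k] += s * x[i] * y
    if step % 50 == 49:
        # replace the least useful feature with a fresh candidate
        worst = min(range(len(features)), key=lambda k: abs(utility[k]))
        features[worst] = make_feature()
        utility[worst] = 0.0

best = max(range(len(features)), key=lambda k: abs(utility[k]))
print(features[best])   # surviving high-utility features tend to use input 0 or 2
```

Gradient descent tunes the weights *on* these features; generate-and-test searches *over which features exist* — which is the interplay the abstract refers to.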