- Representation Learning (2)
- CLIP (1)
- Contrastive Learning (1)
- Language Modeling (1)
- Machine Learning (1)
- Modality Gap (1)
- Fall 2020
Language Modeling (LM) is often formulated as a next-word prediction problem over a large vocabulary, which makes it challenging. To perform next-word prediction effectively, Long Short-Term Memory networks (LSTMs) must keep track of many types of information. Some information is...
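For readers unfamiliar with the setup this abstract describes, here is a minimal PyTorch sketch of LSTM next-word prediction. The vocabulary size, dimensions, and toy batch are illustrative assumptions, not details taken from the thesis:

```python
# A minimal sketch of next-word prediction with an LSTM (PyTorch).
# All sizes below are illustrative placeholders.
import torch
import torch.nn as nn

class NextWordLSTM(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, vocab_size)  # logits over the vocabulary

    def forward(self, tokens):
        # tokens: (batch, seq_len) integer token ids
        x = self.embed(tokens)
        out, _ = self.lstm(x)   # the hidden state carries the context seen so far
        return self.head(out)   # (batch, seq_len, vocab_size)

model = NextWordLSTM()
tokens = torch.randint(0, 10000, (4, 32))  # toy batch of token ids
logits = model(tokens)
# Train by shifting: predict token t+1 from the prefix ending at t.
loss = nn.functional.cross_entropy(
    logits[:, :-1].reshape(-1, 10000), tokens[:, 1:].reshape(-1)
)
```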
- The Contrastive Gap: A New Perspective on the ‘Modality Gap’ in Multimodal Contrastive Learning
Fall 2024
Learning jointly from images and texts using contrastive pre-training has emerged as an effective method to train large-scale models with a strong grasp of semantic image concepts. For instance, CLIP, pre-trained on a large corpus of web data, excels in tasks like zero-shot image classification,...
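As a rough illustration of the contrastive pre-training objective this abstract refers to, below is a minimal sketch of a CLIP-style symmetric contrastive loss. The random stand-in embeddings, embedding dimension, and temperature value are illustrative assumptions; actual CLIP pairs a vision encoder with a text transformer:

```python
# A minimal sketch of a CLIP-style symmetric contrastive objective (PyTorch).
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_emb, text_emb, temperature=0.07):
    # Normalize so that dot products are cosine similarities on the unit sphere.
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = image_emb @ text_emb.t() / temperature  # (N, N) pairwise similarities
    targets = torch.arange(len(image_emb))           # matched pairs lie on the diagonal
    # Cross-entropy in both directions: image-to-text and text-to-image.
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

# Toy batch: 8 image-text pairs embedded in a shared 512-d space.
imgs, txts = torch.randn(8, 512), torch.randn(8, 512)
loss = clip_contrastive_loss(imgs, txts)
```

The symmetric form pulls each image toward its paired caption and pushes it away from the other captions in the batch, and vice versa, which is what yields the zero-shot classification ability the abstract mentions.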