




This file is in the following communities:

Graduate Studies and Research, Faculty of


This file is in the following collections:

Theses and Dissertations

Solving Association Problems with Convex Co-embedding Open Access


Subject / keywords
Joint Embedding
Relation Learning
Link Prediction
Structured Output Prediction
Multilabel Classification
Knowledge Graph Completion
Convex Optimization
Association Learning
Association Problems
Embedding Inference
Constrained Co-embedding
Convex Co-embedding
Semantic Embedding
Type of item
Degree grantor
University of Alberta
Author or creator
Mirzazadeh, Farzaneh
Supervisor and department
Greiner, Russell (Computing Science)
Schuurmans, Dale (Computing Science)
Examining committee member and department
Bowling, Michael (Computing Science)
Zemel, Richard (Computer Science, University of Toronto)
Szepesvari, Csaba (Computing Science)
Sander, Joerg (Computing Science)
Department of Computing Science

Date accepted
Graduation date
Spring 2017 (2017-06)
Degree level
Doctor of Philosophy
Co-embedding is the process of mapping elements from multiple sets into a common latent space, which can be exploited to infer element-wise associations by considering the geometric proximity of their embeddings. Such an approach underlies the state of the art for link prediction, relation learning, multi-label tagging, relevance retrieval and ranking. This dissertation provides contributions to the study of co-embedding for solving association problems.

First, a unifying view for solving association problems with co-embedding is presented, which covers both alignment-based and distance-based models. Although current approaches rely on local training methods applied to non-convex formulations, I demonstrate how general convex formulations can be achieved for co-embedding. I then empirically compare convex versus non-convex formulations of the training problem under an alignment model. Surprisingly, the empirical results reveal that, in most cases, the two are equivalent.

Second, the connection between metric learning and co-embedding is investigated. I show that heterogeneous metric learning can be cast as distance-based co-embedding, and propose a scalable algorithm for solving the training problem globally. The co-embedding framework allows metric learning to be applied to a wide range of association problems, including link prediction, relation learning, multi-label tagging and ranking. I investigate the relation between the standard non-convex training formulation and the proposed convex reformulation of heterogeneous metric learning, both empirically and analytically. Again, it is discovered that under certain conditions, the objective values achieved by the two approaches are identical. I develop a formal characterization of the conditions under which this equality holds.

Finally, a constrained form of co-embedding is proposed for structured output prediction.
A key bottleneck in structured output prediction is the need for inference during training and testing, usually requiring some form of dynamic programming. Rather than using approximate inference or tailoring a specialized inference method to a particular structure, I instead pre-compile prediction constraints directly into the learned representation. By eliminating the need for explicit inference, a more scalable approach to structured output prediction can be achieved, particularly at test time. I demonstrate the idea for hierarchical multi-label prediction under subsumption and mutual exclusion constraints, where a relationship to maximum margin structured output prediction can be established. Experiments demonstrate that the benefits of structured output training can still be realized even after inference has been eliminated.
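To make the abstract's central idea concrete, the following is a minimal sketch of the two co-embedding model families it describes: alignment-based models score an association by the inner product of two embeddings, while distance-based models score it by how close the embeddings lie. The embedding matrices here are random placeholders (in the thesis they would be learned, e.g. by convex training), and all variable and function names are illustrative, not taken from the dissertation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy co-embedding: map elements of two sets (e.g. items and labels)
# into a shared d-dimensional latent space. U and V stand in for
# embeddings that would normally be learned from association data.
d = 4
U = rng.normal(size=(3, d))  # embeddings for 3 elements of the first set
V = rng.normal(size=(5, d))  # embeddings for 5 elements of the second set

def alignment_score(u, v):
    """Alignment-based model: larger inner product => stronger association."""
    return float(u @ v)

def distance_score(u, v):
    """Distance-based model: smaller Euclidean distance => stronger association."""
    return float(np.linalg.norm(u - v))

# Predict, for each element of the first set, its best match in the second set.
best_by_alignment = np.argmax(U @ V.T, axis=1)
pairwise_dists = np.linalg.norm(U[:, None, :] - V[None, :, :], axis=-1)
best_by_distance = np.argmin(pairwise_dists, axis=1)
print(best_by_alignment, best_by_distance)
```

The two scoring rules can disagree on which pairs are most strongly associated, which is one reason the thesis treats them under a single unifying framework rather than as interchangeable.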
This thesis is made available by the University of Alberta Libraries with permission of the copyright owner solely for the purpose of private, scholarly or scientific research. This thesis, or any portion thereof, may not otherwise be copied or reproduced without the written consent of the copyright owner, except to the extent permitted by Canadian copyright law.
Citation for previous publication
Mirzazadeh, F., Ravanbakhsh, S., Ding, N. and Schuurmans, D. (2015) Embedding inference for structured multilabel prediction. In Advances in Neural Information Processing Systems (NIPS-15).
Mirzazadeh, F., White, M., Gyorgy, A. and Schuurmans, D. (2015) Scalable metric learning for co-embedding. In European Conference on Machine Learning (ECML-15).
Mirzazadeh, F., Guo, Y. and Schuurmans, D. (2014) Convex co-embedding. In Twenty-Eighth Annual Conference on Artificial Intelligence (AAAI-14).

File Details

Date Uploaded
Date Modified
File format: pdf (Portable Document Format)
Mime type: application/pdf
File size: 2,384,739 bytes (2.3 MB)
Last modified: 2017-06-13 12:22:11-06:00
Filename: Mirzazadeh_Farzaneh_201704_PhD.pdf
Original checksum: ab0b84e296b0e6ae328afe5cdef9ccfb
Well formed: true
Valid: true
File title: Introduction
Page count: 111