ERA: Education and Research Archive

Permanent link (DOI): https://doi.org/10.7939/R37H1DT5Q

Communities

This file is in the following community:

Faculty of Graduate Studies and Research

Collections

This file is in the following collection:

Theses and Dissertations

Fast gradient algorithms for structured sparsity (Open Access)

Descriptions

Subject/Keyword
generalized conditional gradient
prox-decomposition
gradient algorithm
proximal average
structured sparsity
proximal map
Type of item
Thesis
Degree grantor
University of Alberta
Author or creator
Yu, Yaoliang
Supervisor and department
Schuurmans, Dale (Computing Science)
Szepesvari, Csaba (Computing Science)
Examining committee member and department
Bach, Francis (Computer Science Laboratory, Ecole Normale Superieure)
Gyorgy, Andras (Computing Science)
Salavatipour, Mohammad (Computing Science)
Sacchi, Mauricio (Physics)
Department
Department of Computing Science
Specialization
Statistical machine learning
Date accepted
2013-11-15T11:13:27Z
Graduation date
2014-06
Degree
Doctor of Philosophy
Degree level
Doctoral
Abstract
Many machine learning problems can be formulated within the composite minimization framework, which typically involves a smooth loss function and a nonsmooth regularizer. Many algorithms have been proposed for such problems, with the main focus on first-order gradient methods due to their applicability in very large-scale application domains. A common requirement of these popular gradient algorithms is access to the proximal map of the regularizer, which unfortunately may not be easily computable in scenarios such as structured sparsity. In this thesis we first identify conditions under which the proximal map of a sum of functions is simply the composition of the proximal maps of the individual summands, unifying known results and uncovering novel ones. Next, motivated by the observation that many structured sparse regularizers are merely sums of simple functions, we consider a linear approximation of the proximal map, resulting in the so-called proximal average. Surprisingly, combining this approximation with fast gradient schemes yields strictly better convergence rates than the usual smoothing strategy, without incurring any overhead. Finally, we propose a generalization of the conditional gradient algorithm that abandons the proximal map entirely and requires instead the polar, a significantly cheaper operation in certain matrix applications. We establish its convergence rate and demonstrate its superiority on several matrix problems, including matrix completion, multi-class and multi-task learning, and dictionary learning.
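The proximal-average idea from the abstract lends itself to a compact illustration. The following is a minimal NumPy sketch, not code from the thesis: it assumes a regularizer that is the equal-weight average of an l1 penalty and a box indicator, and it replaces the (possibly intractable) proximal map of the sum with the weighted average of the individual proximal maps inside a plain proximal gradient loop. All names here (prox_average_gradient, prox_l1, prox_box) and the toy least-squares problem are illustrative choices.

```python
import numpy as np

def prox_l1(v, t):
    # Soft-thresholding: the proximal map of t * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_box(v, t):
    # Projection onto [-1, 1]^d: the proximal map of the box indicator
    # (a projection does not depend on the step size t).
    return np.clip(v, -1.0, 1.0)

def prox_average_gradient(grad, x0, proxes, weights, step, iters=500):
    # Proximal-average gradient descent (illustrative sketch): after each
    # gradient step on the smooth loss, apply the weighted *average* of the
    # summands' proximal maps in place of the proximal map of their sum.
    x = x0.copy()
    for _ in range(iters):
        y = x - step * grad(x)  # forward (gradient) step
        x = sum(w * p(y, step) for w, p in zip(weights, proxes))
    return x

# Toy problem: least squares regularized by the average of an l1 term
# and a box indicator (problem data and weights are made up).
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 20))
b = rng.standard_normal(40)
grad = lambda x: A.T @ (A @ x - b)
step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz constant of grad
x_hat = prox_average_gradient(grad, np.zeros(20),
                              proxes=[prox_l1, prox_box],
                              weights=[0.5, 0.5], step=step)
```

Note that the averaging step costs no more than evaluating the individual proximal maps, which is the source of the "without incurring any overhead" claim in the abstract; the thesis pairs this approximation with accelerated (fast) gradient schemes rather than the plain loop sketched above.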
Language
English
DOI
doi:10.7939/R37H1DT5Q
Rights
Permission is hereby granted to the University of Alberta Libraries to reproduce single copies of this thesis and to lend or sell such copies for private, scholarly or scientific research purposes only. Where the thesis is converted to, or otherwise made available in digital form, the University of Alberta will advise potential users of the thesis of these terms. The author reserves all other publication and other rights in association with the copyright in the thesis and, except as herein before provided, neither the thesis nor any substantial portion thereof may be printed or otherwise reproduced in any material form whatsoever without the author's prior written permission.
Citation for previous publication
Yaoliang Yu. On Decomposing the Proximal Map. Advances in Neural Information Processing Systems 27 (NIPS), 2013.
Yaoliang Yu. Better Approximation and Faster Algorithm Using the Proximal Average. Advances in Neural Information Processing Systems 27 (NIPS), 2013.
Yaoliang Yu, Hao Cheng, Dale Schuurmans and Csaba Szepesvari. Characterizing the Representer Theorem. International Conference on Machine Learning (ICML), 2013.
Xinhua Zhang, Yaoliang Yu and Dale Schuurmans. Accelerated Training for Matrix-norm Regularization: A Boosting Approach. Advances in Neural Information Processing Systems 26 (NIPS), 2012.

File Details

Date Modified
2014-06-15T07:13:40.863+00:00
Characterization
File format: pdf (Portable Document Format)
Mime type: application/pdf
File size: 1,578,653 bytes (approximately 1.5 MB)
Last modified: 2015-10-12 18:59:44-06:00
Filename: Yu_Yaoliang_Spring 2014.pdf
Original checksum: d81e2253bc3d2bcfbf1f60550431914f
Well formed: false
Valid: false
Status message: Invalid page dictionary object offset=1559324