Regularized factor models

  • Author / Creator
    White, Martha
  • This dissertation explores regularized factor models as a simple unification of machine learning problems, with a focus on algorithmic development within this known formalism. The main contributions are (1) the development of generic, efficient algorithms for a subclass of regularized factorizations and (2) new unifications that facilitate application of these algorithms to problems previously without known tractable algorithms. Concurrently, the generality of the formalism is further demonstrated with a thorough summary of known, but often scattered, connections between supervised and unsupervised learning problems and algorithms.
    The dissertation first presents the main algorithmic advances: convex reformulations of non-convex regularized factorization objectives. A convex reformulation is developed for a general subset of regularized factor models, with an efficiently computable optimization for five different regularization choices. The thesis then describes advances using these generic convex reformulation techniques in three important problems: multi-view subspace learning, semi-supervised learning and estimating autoregressive moving average models. These novel settings are unified under regularized factor models by incorporating problem properties in terms of regularization. Once expressed as regularized factor models, we can take advantage of the convex reformulation techniques to obtain novel algorithms that produce global solutions. These advances include the first global estimation procedure for two-view subspace learning and for autoregressive moving average models. The simple algorithms obtained from these general convex reformulation techniques are empirically shown to be effective across these three problems on a variety of datasets.
    This dissertation illustrates that many problems can be specified as a simple regularized factorization, that this class is amenable to global optimization and that it is advantageous to represent machine learning problems as regularized factor models.

  • Subjects / Keywords
  • Graduation date
    Spring 2015
  • Type of Item
    Thesis
  • Degree
    Doctor of Philosophy
  • DOI
    https://doi.org/10.7939/R32V2CJ50
  • License
    This thesis is made available by the University of Alberta Libraries with permission of the copyright owner solely for non-commercial purposes. This thesis, or any portion thereof, may not otherwise be copied or reproduced without the written consent of the copyright owner, except to the extent permitted by Canadian copyright law.
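
As a point of reference for the abstract above, a minimal schematic of a regularized factor model objective, a factorization loss plus regularizers on the two factors, can be written as follows. The notation here (data matrix X, factors Z and \Phi, loss L, weight \alpha, regularizers R_Z and R_\Phi) is illustrative only and may differ from the thesis's exact formulation:

    \[
      \min_{Z,\ \Phi}\; L(Z\Phi;\, X) \;+\; \alpha\,\big( R_Z(Z) + R_\Phi(\Phi) \big)
    \]

Different choices of the loss and regularizers specialize this template to settings such as multi-view subspace learning, semi-supervised learning and autoregressive moving average estimation; the convex reformulations described in the abstract replace the jointly non-convex problem over Z and \Phi with a convex one, which is how globally optimal solutions are obtained.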