Instance-dependent analysis of learning algorithms

  • Author / Creator
    Huang, Ruitong
  • Abstract
    On the one hand, theoretical analyses of machine learning algorithms are typically performed under various probabilistic assumptions about the data. While these assumptions are central to the analyses, it is debatable whether they actually hold in practice. Another question is whether such probabilistic assumptions really capture the essence of "learning", as has been implicitly assumed since the introduction of the PAC model in learning theory. On the other hand, analyses that avoid assumptions about the data tend to follow a worst-case minimax approach, e.g. the adversarial online learning framework. The resulting bounds often fail to capture and exploit the 'niceness' of the data that may help speed up learning, and it is likewise debatable whether the data encountered in typical learning scenarios is ever truly adversarial. Motivated by these issues, this thesis proposes instance-dependent analyses of learning algorithms to improve our understanding of learning. Special emphasis is placed on characterizing the 'niceness' of data from the perspective of learning. We demonstrate this approach in three settings:
    1. In the unsupervised learning setting, we redefine the problem of independent component analysis (ICA) to avoid stochastic assumptions of any kind, and develop (for the first time) a provably polynomial-time learning algorithm based on our deterministic analysis.
    2. In the supervised learning setting, we start with a statistical framework: we analyze the finite-sample performance of the empirical risk minimization (ERM) algorithm for a generalized partially linear model under the random design setting. We detect a potential deficiency of ERM, and further investigation leads to a high-probability instance-dependent generalization bound.
    3. Finally, in the online learning setting, we give a thorough analysis of the follow-the-leader (FTL) algorithm for the online linear prediction problem, and discover a broad range of previously unknown favourable conditions on the data under which FTL achieves a fast learning rate (a minimal illustrative sketch of FTL follows this abstract).
    Our approach leads to instance-dependent results that are more general, expressive, and meaningful, in the sense that they capture the important properties of the data on which the performance of a learning algorithm heavily relies.
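
    The thesis text is not part of this record, but since the abstract names the follow-the-leader (FTL) algorithm, here is a minimal illustrative sketch of FTL for online linear prediction, written in Python. It is not code from the thesis: the Euclidean unit-ball decision set, the zero-vector tie-break at the first round, and the name ftl_unit_ball are assumptions made purely for illustration.

        import numpy as np

        def ftl_unit_ball(loss_vectors):
            """Follow the leader (FTL) over the Euclidean unit ball.

            At round t, FTL plays the point of the decision set that minimizes
            the cumulative linear loss of rounds 1..t-1:
                w_t = argmin_{||w|| <= 1} <theta, w> = -theta / ||theta||,
            where theta = f_1 + ... + f_{t-1}.
            """
            theta = None      # cumulative loss vector over past rounds
            total_loss = 0.0
            for f in loss_vectors:
                f = np.asarray(f, dtype=float)
                if theta is None or np.linalg.norm(theta) == 0.0:
                    w = np.zeros_like(f)                 # leader undefined yet: arbitrary tie-break
                else:
                    w = -theta / np.linalg.norm(theta)   # best fixed point in hindsight
                total_loss += float(f @ w)               # incur the linear loss <f_t, w_t>
                theta = f if theta is None else theta + f
            return total_loss

        # Losses with a persistent drift keep theta bounded away from the origin,
        # one "nice" regime in which the leader stabilizes and FTL learns fast.
        rng = np.random.default_rng(0)
        losses = rng.normal(loc=[0.5, 0.0], scale=1.0, size=(1000, 2))
        print(ftl_unit_ball(losses))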

  • Subjects / Keywords
  • Graduation date
    Fall 2017
  • Type of Item
    Thesis
  • Degree
    Doctor of Philosophy
  • DOI
    https://doi.org/10.7939/R39G5GT4V
  • License
    This thesis is made available by the University of Alberta Libraries with permission of the copyright owner solely for non-commercial purposes. This thesis, or any portion thereof, may not otherwise be copied or reproduced without the written consent of the copyright owner, except to the extent permitted by Canadian copyright law.