
Automatic step-size adaptation in incremental supervised learning

  • Author / Creator
    Mahmood, Ashique
  • Abstract
    The performance and stability of many iterative algorithms, such as stochastic gradient descent, depend largely on a fixed, scalar step-size parameter, and using a fixed scalar step size can limit performance on many problems. We study several existing step-size adaptation algorithms on nonstationary supervised learning problems using simulated and real-world data, and discover that the effectiveness of these algorithms requires tuning a meta parameter across problems. We introduce a new algorithm, Autostep, by combining several new techniques with an existing algorithm, and demonstrate that it can effectively adapt a vector step-size parameter on all of our training and test problems without tuning its meta parameter across them. Autostep is the first step-size adaptation algorithm that can be used on widely different problems with the same setting of all of its parameters. (An illustrative sketch of a vector step-size update in this spirit appears at the end of this record.)

  • Subjects / Keywords
  • Graduation date
    Fall 2010
  • Type of Item
    Thesis
  • Degree
    Master of Science
  • DOI
    https://doi.org/10.7939/R31G9D
  • License
    This thesis is made available by the University of Alberta Libraries with permission of the copyright owner solely for non-commercial purposes. This thesis, or any portion thereof, may not otherwise be copied or reproduced without the written consent of the copyright owner, except to the extent permitted by Canadian copyright law.
  • Language
    English
  • Institution
    University of Alberta
  • Degree level
    Master's
  • Department
  • Supervisor / co-supervisor and their department(s)
  • Examining committee members and their departments
    • Schuurmans, Dale (Computing Science)
    • Sutton, Richard S. (Computing Science)
    • Shah, Sirish L. (Chemical and Materials Engineering)
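  • Illustrative sketch
    The abstract's central idea, adapting a separate step size for each weight online, can be sketched compactly. The Python below follows an IDBD-style meta-gradient update with normalization, in the spirit of Autostep as later published (Mahmood, Sutton, Degris, and Pilarski, "Tuning-free step-size adaptation", ICASSP 2012). This is a minimal sketch under stated assumptions, not the thesis's exact formulation: the function name, variable names (alpha, h, v), and default constants (mu, tau) are illustrative choices.

    import numpy as np

    def autostep_update(w, alpha, h, v, x, y_target, mu=1e-2, tau=1e4):
        """One Autostep-style update for linear regression with a
        per-feature (vector) step size.  A hedged sketch: names and
        constants are assumptions, not taken from the thesis itself.

        w     : weight vector
        alpha : per-feature step sizes, adapted online
        h     : trace of recent weight updates (meta-gradient memory)
        v     : running normalizer for the meta-gradient signal
        """
        delta = y_target - w @ x      # prediction error
        g = delta * x * h             # per-feature meta-gradient signal

        # Track the magnitude of the meta-gradient signal so that the
        # meta step size mu needs no per-problem tuning.
        v = np.maximum(np.abs(g),
                       v + (1.0 / tau) * alpha * x * x * (np.abs(g) - v))

        # Multiplicative step-size update, normalized where v > 0.
        nz = v > 0
        alpha[nz] *= np.exp(mu * g[nz] / v[nz])

        # Effective-step normalization: keep sum_i alpha_i * x_i^2 <= 1
        # so a single update cannot overshoot the current target.
        m = max(alpha @ (x * x), 1.0)
        alpha /= m

        # Ordinary gradient step with the adapted vector step size,
        # then decay and refresh the update trace h.
        w += alpha * delta * x
        h = h * np.maximum(0.0, 1.0 - alpha * x * x) + alpha * delta * x
        return w, alpha, h, v, delta

    A driver loop would initialize w, h, and v to zero vectors and alpha to a small positive constant (for example, 0.1 divided by the number of features), then call the function once per incoming example. The normalization of the meta-gradient by v is what allows the same setting of mu to work across widely different problems, which is the property the abstract highlights.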