Statistical analysis of L1-penalized linear estimation with applications

  • Author / Creator
    Ávila Pires, Bernardo
  • Abstract
    We study linear estimation based on perturbed data when performance is measured by a matrix norm of the expected residual error, in particular, the case in which there are many unknowns, but the “best” estimator is sparse, or has small L1-norm. We propose a Lasso-like procedure that finds the minimizer of an L1-penalized squared norm of the residual. For linear regression we show O(sqrt(1/n)) uniform bounds for the difference between the residual error norm of our estimator and that of the “best” estimator. These also hold for on-policy value function approximation in reinforcement learning. In the off-policy case, we show O(sqrt((ln n)/n)) bounds for the expected difference. Our analysis has a unique feature: it is the same for both regression and reinforcement learning. We took care to separate the deterministic and probabilistic arguments, so as to analyze a range of seemingly different linear estimation problems in a unified way.
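    The abstract's Lasso-like objective — minimizing an L1-penalized squared norm of the residual — can be illustrated with a standard proximal-gradient (ISTA) solver. This is a generic sketch of that class of objective on synthetic data, not the thesis's own procedure or analysis; the problem sizes, penalty level, and iteration count below are illustrative assumptions.

    ```python
    import numpy as np

    def soft_threshold(v, t):
        # Proximal operator of t * ||.||_1 (elementwise shrinkage).
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def ista_lasso(A, b, lam, n_iter=500):
        """Minimize 0.5 * ||A x - b||_2^2 + lam * ||x||_1 via ISTA."""
        x = np.zeros(A.shape[1])
        step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz constant of the gradient
        for _ in range(n_iter):
            grad = A.T @ (A @ x - b)
            x = soft_threshold(x - step * grad, step * lam)
        return x

    # Synthetic sparse-recovery instance: many unknowns, sparse "best" estimator.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((100, 20))
    x_true = np.zeros(20)
    x_true[:3] = [2.0, -1.5, 1.0]          # sparse ground truth
    b = A @ x_true + 0.01 * rng.standard_normal(100)
    x_hat = ista_lasso(A, b, lam=0.1)
    ```

    With a small penalty and low noise, the recovered `x_hat` is close to the sparse ground truth; larger `lam` trades bias for sparsity.
    
    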

  • Degree
    Master of Science
  • License
    This thesis is made available by the University of Alberta Libraries with permission of the copyright owner solely for non-commercial purposes. This thesis, or any portion thereof, may not otherwise be copied or reproduced without the written consent of the copyright owner, except to the extent permitted by Canadian copyright law.
  • Institution
    University of Alberta
  • Department
    • Department of Computing Science
  • Supervisor / co-supervisor and their department(s)
    • Csaba Szepesvári (Computing Science)
  • Examining committee members and their departments
    • Byron Schmuland (Mathematical and Statistical Sciences)
    • Dale Schuurmans (Computing Science)