
Fixed Point Propagation: A New Way To Train Recurrent Neural Networks Using Auxiliary Variables

  • Author / Creator
    Nath, Somjit
  • Recurrent neural networks (RNNs), along with their many variants, provide a powerful tool for online prediction in partially observable problems. Two persistent issues with RNNs, however, are the difficulty of capturing long-term dependencies and long training times. A variety of strategies have been proposed to improve training in RNNs, particularly by approximating an algorithm called Real-Time Recurrent Learning. These strategies, however, can still be computationally expensive, and they concentrate computation on computing gradients back in time. In this work, we show that learning the hidden state in RNNs can be framed as a fixed-point problem. Using this formulation, we provide an asynchronous fixed-point iteration update that significantly improves the run time and stability of learning the state update.
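    The core idea in the abstract can be illustrated with a minimal sketch: treat the hidden state as a fixed point h* = f(h*, x) of the state-update function and solve for it by repeated iteration rather than by back-propagation through time. The Python snippet below is not the thesis's code; the weights, shapes, and names such as fixed_point_state are illustrative assumptions, and for simplicity it uses a plain synchronous iteration where the thesis proposes an asynchronous update.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_hidden, n_input = 8, 4
    W_h = rng.normal(scale=0.1, size=(n_hidden, n_hidden))  # recurrent weights (illustrative)
    W_x = rng.normal(scale=0.1, size=(n_hidden, n_input))   # input weights (illustrative)

    def f(h, x):
        """One application of the RNN state-update function."""
        return np.tanh(W_h @ h + W_x @ x)

    def fixed_point_state(x, h0=None, tol=1e-6, max_iter=100):
        """Iterate h <- f(h, x) until the state stops changing."""
        h = np.zeros(n_hidden) if h0 is None else h0
        for _ in range(max_iter):
            h_next = f(h, x)
            if np.linalg.norm(h_next - h) < tol:
                return h_next
            h = h_next
        return h

    x = rng.normal(size=n_input)
    h_star = fixed_point_state(x)
    # Near zero: h_star satisfies h* = f(h*, x), i.e. it is a fixed point.
    print(np.linalg.norm(f(h_star, x) - h_star))
    ```

    With small weights, tanh makes f a contraction, so the iteration converges; the appeal of this framing is that the state can be updated in place, without storing and unrolling the full history of past states.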

  • Graduation date
    Fall 2019
  • Type of Item
    Thesis
  • Degree
    Master of Science
  • DOI
    https://doi.org/10.7939/r3-evks-dm20
  • License
    Permission is hereby granted to the University of Alberta Libraries to reproduce single copies of this thesis and to lend or sell such copies for private, scholarly or scientific research purposes only. Where the thesis is converted to, or otherwise made available in digital form, the University of Alberta will advise potential users of the thesis of these terms. The author reserves all other publication and other rights in association with the copyright in the thesis and, except as herein before provided, neither the thesis nor any substantial portion thereof may be printed or otherwise reproduced in any material form whatsoever without the author's prior written permission.