A Computational Model of Learning from Replayed Experience in Spatial Navigation

  • Author / Creator
    Mirian HosseinAbadi, MahdiehSadat
  • In this thesis we propose a computational model of animal behavior in spatial navigation, based on ideas from reinforcement learning (RL). In computer science, and specifically in artificial intelligence, replay refers to retrieving and reprocessing experiences stored in an abstract representation of the environment. In neuroscience, replay refers to the reactivation of hippocampal neurons that were active during a previous learning task, in a pattern that can be interpreted as replaying earlier experiences. Our model draws on the replay idea as it arose independently in these two fields; given this correspondence, it is natural to use RL algorithms to model the biological replay phenomenon.
    We illustrated, through computational experiments, that our replay model can account for behavioral navigation experiments that were previously hard to explain, such as latent-learning and insight experiments. Many computational models have been proposed to capture rats' behavior in mazes and open-field environments. We showed that our model has two major advantages over prior ones: (i) its learning algorithm is simpler than those of previous computational models, yet it can explain complicated behavioral phenomena in spatial navigation; (ii) it generates replay sequences that are consistent with the replay patterns observed in neural recordings from the rat brain.
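    The replay idea described in the abstract can be sketched as tabular Q-learning with an offline replay pass over stored experiences. This is a minimal illustration under stated assumptions, not the thesis's actual model: the one-dimensional corridor environment, the function names, and all parameter values below are hypothetical.

    ```python
    import random

    # Toy 1-D corridor: states 0..4, reward only at the goal state 4.
    # (Hypothetical example environment, not the thesis's maze.)
    N_STATES = 5
    GOAL = 4
    ACTIONS = [-1, +1]  # step left / step right
    ALPHA, GAMMA = 0.5, 0.9

    def step(s, a):
        """Take action a in state s; return (next state, reward, done)."""
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else 0.0
        return s2, r, s2 == GOAL

    def train(n_episodes=30, n_replay=20, seed=0):
        rng = random.Random(seed)
        Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
        buffer = []  # stored experiences (s, a, r, s2), the "abstract representation"
        for _ in range(n_episodes):
            s, done = 0, False
            while not done:
                a = rng.choice(ACTIONS)  # random exploration
                s2, r, done = step(s, a)
                buffer.append((s, a, r, s2))
                # online temporal-difference update
                target = r + GAMMA * max(Q[(s2, b)] for b in ACTIONS) * (not done)
                Q[(s, a)] += ALPHA * (target - Q[(s, a)])
                s = s2
            # "replay": reprocess a sample of stored experiences offline,
            # analogous to hippocampal replay between behavioral episodes
            for s_, a_, r_, s2_ in rng.sample(buffer, min(n_replay, len(buffer))):
                tgt = r_ + GAMMA * max(Q[(s2_, b)] for b in ACTIONS) * (s2_ != GOAL)
                Q[(s_, a_)] += ALPHA * (tgt - Q[(s_, a_)])
        return Q

    Q = train()
    # Greedy policy: in every non-goal state the learned values favor
    # stepping right, toward the reward.
    policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)}
    print(policy)
    ```

    The replay pass lets the agent propagate value information without further interaction with the environment, which is how replay-based models can explain phenomena such as latent learning: experience gathered without reward can still shape later value estimates once reward is encountered.
    
    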

  • Subjects / Keywords
  • Graduation date
    Spring 2012
  • Type of Item
  • Degree
    Master of Science
  • DOI
  • License
    This thesis is made available by the University of Alberta Libraries with permission of the copyright owner solely for non-commercial purposes. This thesis, or any portion thereof, may not otherwise be copied or reproduced without the written consent of the copyright owner, except to the extent permitted by Canadian copyright law.