Evolving Recurrent Neural Networks for Emergent Communication

  • Author / Creator
    Sirota, Joshua
  • Abstract
    Emergent communication is a framework for machine language acquisition that has recently been used to train deep neural networks to develop shared languages from scratch and to use those languages to communicate and cooperate. Previous work on emergent communication has relied on gradient-based learning. Recent advances in gradient-free evolutionary computation, however, provide an alternative approach to training deep neural networks that could benefit emergent communication. Certain evolutionary algorithms have been shown to be robust to misleading gradients, which can be a problem in cooperative communication tasks. Additionally, some evolutionary algorithms have been shown to train quickly and to require only CPUs, rather than the GPUs needed for gradient-based training.

    This thesis addresses whether a gradient-free evolutionary approach can be used as a training methodology for emergent communication among deep neural networks. The evolutionary approach we use consists of a genetic algorithm that searches for both the weights and the architectures of these networks. We adapt evolutionary techniques previously used to evolve individual agents so as to co-evolve pairs of agents that develop languages to play a repeated referential game. We empirically demonstrate that agents trained solely with evolution perform well above a random-chance baseline, although their performance is worse than that previously achieved with gradient-based reinforcement learning. We show that evolving the architecture of these agents can improve their ability to perform cooperative communication-based tasks compared to using a fixed architecture. The main contribution of this thesis is to show that an evolutionary approach can be used to train agents to communicate, which suggests that these techniques could be useful for future research on cooperative multi-agent problems involving deep neural networks. (A minimal illustrative sketch of such a co-evolutionary setup follows this record.)

  • Subjects / Keywords
  • Graduation date
    Fall 2019
  • Type of Item
    Thesis
  • Degree
    Master of Science
  • DOI
    https://doi.org/10.7939/r3-wz2h-xt20
  • License
    Permission is hereby granted to the University of Alberta Libraries to reproduce single copies of this thesis and to lend or sell such copies for private, scholarly or scientific research purposes only. Where the thesis is converted to, or otherwise made available in digital form, the University of Alberta will advise potential users of the thesis of these terms. The author reserves all other publication and other rights in association with the copyright in the thesis and, except as herein before provided, neither the thesis nor any substantial portion thereof may be printed or otherwise reproduced in any material form whatsoever without the author's prior written permission.
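
The approach summarized in the abstract can be illustrated with a small, self-contained example. The sketch below is not the thesis's implementation: it co-evolves sender/receiver pairs of simple linear agents (rather than recurrent networks with evolved architectures) on a toy one-symbol referential game, and the game sizes, truncation selection, Gaussian-noise mutation, and all hyperparameters are assumptions chosen only for illustration. It is written in Python with NumPy.

    # Minimal sketch of co-evolving sender/receiver pairs with a genetic algorithm
    # on a toy one-symbol referential game. Illustration only: the thesis evolves
    # recurrent networks and their architectures; here the agents are fixed linear
    # maps and all sizes, operators, and hyperparameters are assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    N_OBJECTS, VOCAB, N_CANDIDATES = 8, 8, 4   # assumed toy game sizes

    def play_game(pair, n_trials=200):
        """Fraction of trials in which the receiver identifies the sender's target."""
        W_s, W_r = pair   # sender: VOCAB x N_OBJECTS, receiver: N_OBJECTS x VOCAB
        correct = 0
        for _ in range(n_trials):
            candidates = rng.choice(N_OBJECTS, size=N_CANDIDATES, replace=False)
            target_pos = rng.integers(N_CANDIDATES)
            target = np.eye(N_OBJECTS)[candidates[target_pos]]
            symbol = int(np.argmax(W_s @ target))           # sender emits one discrete symbol
            message = np.eye(VOCAB)[symbol]
            scores = np.eye(N_OBJECTS)[candidates] @ (W_r @ message)  # receiver scores each candidate
            correct += int(np.argmax(scores) == target_pos)
        return correct / n_trials

    def random_pair():
        """A fresh sender/receiver weight pair drawn from a standard normal."""
        return (rng.normal(size=(VOCAB, N_OBJECTS)), rng.normal(size=(N_OBJECTS, VOCAB)))

    def mutate(pair, sigma=0.1):
        """Gaussian-noise mutation applied jointly to both agents in a pair."""
        return tuple(W + sigma * rng.normal(size=W.shape) for W in pair)

    # Genetic algorithm: evaluate fitness, keep the better half, refill with mutated copies.
    population = [random_pair() for _ in range(20)]
    for generation in range(50):
        ranked = sorted(population, key=play_game, reverse=True)
        elite = ranked[: len(ranked) // 2]
        population = elite + [mutate(p) for p in elite]

    best = max(population, key=play_game)
    print("best communication accuracy:", play_game(best))

Random guessing on this toy game picks the target one time in four, so accuracies consistently above 0.25 indicate that a shared code has emerged between the co-evolved pair; the thesis's random-chance baseline plays the same role, although its agents, games, and evolutionary operators are considerably richer.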