Response Generation For An Open-Ended Conversational Agent

  • Author / Creator
    Dziri, Nouha
  • Conversation plays a key role in maintaining human well-being. It is the most natural way for people to interact verbally with one another. Over the past decade, dialogue systems have become omnipresent in our daily lives, assisting with our daily schedules and routines. Recently, neural network models have shown promising results on problems, such as scalability and language independence, that conventional dialogue systems fail to cope with.
    In particular, Sequence-to-Sequence (Seq2Seq) models have achieved notable success in generating natural conversational exchanges by sampling words sequentially, each conditioned on the previously generated words. However, these models still lag far behind human capabilities in the conversations they can carry out. Although Seq2Seq models generate syntactically well-formed responses, those responses are prone to be generic, dull, and off-context, such as "i don't know" or "i'm not sure what you're talking about".
    In this work, we introduce a Topical Hierarchical Recurrent Encoder Decoder (THRED), a novel, fully data-driven, multi-turn response generation system intended to produce contextual and topic-aware responses.
    Our model extends the basic Seq2Seq model with a hierarchical joint attention mechanism that incorporates topical concepts and previous interactions into response generation. We demonstrate that incorporating conversation history and topic information in this way improves the generated conversational responses. To train our model, we provide a clean, high-quality conversational dataset mined from Reddit comments. Additionally, we propose two novel quantitative metrics for measuring the quality of generated responses, dubbed Semantic Coherence and Response Echo Index. Our experiments on these quantitative metrics, along with human evaluation, demonstrate that the proposed model generates more diverse and contextually relevant responses than strong baselines.
    In contrast to the widely used OpenSubtitles dataset, we show that the Reddit dataset is a better resource for training future conversational systems.
    Furthermore, we show that both quantitative metrics agree reasonably well with human judgment, taking a step toward a reliable automatic evaluation procedure.
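    The joint attention idea described above can be illustrated with a minimal sketch: the decoder state attends separately over utterance-history vectors and topic-word vectors, and the two context vectors are combined before predicting the next word. All function names, the dot-product scoring, and the additive combination below are illustrative assumptions, not the thesis's actual equations.

```python
import math

def softmax(scores):
    # numerically stable softmax over a list of attention scores
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def attend(query, keys):
    # dot-product attention (an assumed scoring function): weight each
    # key vector by its similarity to the query, then average the keys
    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
    weights = softmax(scores)
    dim = len(keys[0])
    return [sum(w * key[d] for w, key in zip(weights, keys)) for d in range(dim)]

def joint_context(decoder_state, history_vectors, topic_vectors):
    # Sketch of hierarchical joint attention: one attention pass over
    # conversation-history vectors, one over topic-word vectors, then
    # an (assumed) additive combination into a single context vector
    # that would condition the next-word distribution.
    c_hist = attend(decoder_state, history_vectors)
    c_topic = attend(decoder_state, topic_vectors)
    return [h + t for h, t in zip(c_hist, c_topic)]
```

    In a full decoder, this combined context would be fed, together with the previous word's embedding, into the recurrent cell that produces the next-word distribution; the response is then generated word by word, as the abstract describes.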

  • Subjects / Keywords
  • Graduation date
    Fall 2018
  • Type of Item
  • Degree
    Master of Science
  • DOI
  • License
    Permission is hereby granted to the University of Alberta Libraries to reproduce single copies of this thesis and to lend or sell such copies for private, scholarly or scientific research purposes only. Where the thesis is converted to, or otherwise made available in digital form, the University of Alberta will advise potential users of the thesis of these terms. The author reserves all other publication and other rights in association with the copyright in the thesis and, except as herein before provided, neither the thesis nor any substantial portion thereof may be printed or otherwise reproduced in any material form whatsoever without the author's prior written permission.