
Towards Building Coherent and Faithful Conversational Models

  • Author / Creator
    Dziri, Nouha
  • Dialogue systems powered by large pre-trained language models exhibit an innate ability to deliver fluent and natural-sounding responses. Despite their impressive performance, these models fail to sustain interesting and consistent multi-turn exchanges and often generate factually incorrect statements, known as "hallucinations", impeding their widespread adoption in real-world applications. These issues are not rectified by simply training autoregressive neural models on massive amounts of Web data and then fine-tuning on a specific dialogue benchmark. Progress towards models that do not exhibit these issues requires evaluation metrics that can quantify their prevalence. Unfortunately, significant progress has been made on model architectures without comparable progress on how models are evaluated. What is more, current metrics mostly capture surface-level improvements (e.g., human-likeness) and fail dramatically at measuring a deep understanding of attribution.

    This dissertation aims at building coherent and faithful conversational models by addressing existing problems from three perspectives: modelling, data and evaluation. First, I introduce DEMI, a new objective function that aims to make responses more coherent, interesting and diverse. DEMI focuses on maximizing mutual information between past and known future utterances of a particular turn; this is done by applying the chain rule to mutual information and bounding each term separately. Second, I present Neural Path Hunter (NPH), which follows a generate-then-refine approach, augmenting conventional conversational models with an additional refinement stage that enables them to correct potential hallucinations by querying a knowledge graph. Third, I introduce the BEGIN benchmark, designed to evaluate attribution in knowledge-grounded dialogue systems.

    Through a comprehensive evaluation study on BEGIN, I show that a broad set of existing automatic metrics do not reliably distinguish attributable abstractive responses from unattributable ones, and perform substantially worse when the knowledge source is longer. Lastly, I discuss the origin of hallucinations in conversational models, linking it to noise in dialogue benchmarks and to modelling weaknesses. To address this problem, I follow a data-centric approach and introduce a new benchmark, FaithDial, which drastically enhances faithfulness and other dialogue qualities.

    Overall, in pursuit of building trustworthy conversational models that can be readily adopted in real-world applications, the present thesis highlights (1) how to embed human-like conversational properties in responses, (2) how to make responses more faithful and less prone to hallucination, and (3) how to reliably evaluate faithfulness.
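    As a rough illustration of the kind of decomposition the abstract alludes to (the symbols below are illustrative placeholders, not the thesis's own notation): writing R for a response, C for the past context, and F for a known future utterance, the chain rule of mutual information splits the joint dependence into two terms that can then be lower-bounded separately.

    ```latex
    % Chain rule of mutual information (standard identity):
    % joint dependence of R on (C, F) decomposes into a past term
    % and a conditional future term.
    I(R; C, F) = I(R; C) + I(R; F \mid C)
    % Each term on the right-hand side can be bounded from below
    % independently, yielding a tractable training objective.
    ```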

  • Subjects / Keywords
  • Graduation date
    Spring 2023
  • Type of Item
    Thesis
  • Degree
    Doctor of Philosophy
  • DOI
    https://doi.org/10.7939/r3-33q2-wx48
  • License
    This thesis is made available by the University of Alberta Libraries with permission of the copyright owner solely for non-commercial purposes. This thesis, or any portion thereof, may not otherwise be copied or reproduced without the written consent of the copyright owner, except to the extent permitted by Canadian copyright law.