
Deep Interpretable Modelling

  • Author / Creator
    Babiker, Housam Khalifa Bashier
  • The recent success of deep neural networks has exposed the problem of model transparency.
    The need for explainability is particularly critical in sensitive domains. In addition, regulatory frameworks for the “responsible” deployment of AI are emerging,
    creating legal requirements for transparent, explainable models.
    There are many approaches to explainability, broadly divided into top-down
    methods, such as adapting existing logical models of explainability to deep
    learning, and bottom-up methods (e.g., augmenting “semantics-free” networks
    with fragments of semantic components). However, it remains a challenge for a
    deep network to learn multi-level representations or to provide explanation
    support on demand. Here we describe our development and experiments with
    building interpretable deep neural networks for Natural Language Processing (NLP).
    We focus on learning interpretable representations to generate reliable explanations
    that give users a deeper understanding of the model’s behavior. These representations
    offer feature attribution, contrastive, and hierarchical explanations. We also
    demonstrate the effectiveness of our approach for model distillation and rationale
    extraction.

  • Subjects / Keywords
  • Graduation date
    Spring 2023
  • Type of Item
    Thesis
  • Degree
    Doctor of Philosophy
  • DOI
    https://doi.org/10.7939/r3-sz24-7p61
  • License
    This thesis is made available by the University of Alberta Libraries with permission of the copyright owner solely for non-commercial purposes. This thesis, or any portion thereof, may not otherwise be copied or reproduced without the written consent of the copyright owner, except to the extent permitted by Canadian copyright law.