
Looking at Explainable AI Methods Through The Lens of Causality

  • Author / Creator
    Ashrafi Asli, Seyed Arad
  • With machine learning models becoming more complex and more widely applied to real-world problems, there is a growing need to explain their reasoning. In parallel with advances in deep learning, Explainable AI (XAI) algorithms have been proposed to address the issue of transparency and shed light on the decisions of black-box machine learning models. Many works categorize and compare XAI methods, but they usually offer a subjective outlook. The first contribution of this research is a quantifiable approach, based on causal inference, for comparing XAI methods.

    LIME and SHAP are two of the most popular XAI methods. Both algorithms produce a ranking of feature importance; in a sense, they seek to demonstrate how important each feature is in predicting the outcome. We thoroughly question this pipeline of training a black-box deep learning model and then explaining it afterward using XAI methods. Generating a diverse set of experiments with various causal relationships, we quantify how well the output of LIME and SHAP aligns with the causal relationships at hand. The second contribution of this work is applying our quantifiable framework to measure how closely the output of these widely used XAI methods matches the causal baseline.

  • Subjects / Keywords
  • Graduation date
    Spring 2023
  • Type of Item
    Thesis
  • Degree
    Master of Science
  • DOI
    https://doi.org/10.7939/r3-tpxk-ka81
  • License
    This thesis is made available by the University of Alberta Libraries with permission of the copyright owner solely for non-commercial purposes. This thesis, or any portion thereof, may not otherwise be copied or reproduced without the written consent of the copyright owner, except to the extent permitted by Canadian copyright law.
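The comparison idea described in the abstract can be sketched in miniature: generate data with a known causal structure, fit a model, compute a feature-importance ranking, and check whether that ranking matches the causal ground truth. This is an illustrative sketch only, not the thesis's actual method; it uses permutation importance on a linear model as a stand-in for LIME/SHAP, and the data-generating process, coefficients, and ranking check are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data with a known causal structure (illustrative assumption):
# x0 and x1 cause y (x0 more strongly); x2 is pure noise.
n = 2000
X = rng.normal(size=(n, 3))
y = 3.0 * X[:, 0] + 1.0 * X[:, 1] + rng.normal(scale=0.1, size=n)

# Fit a simple linear model via least squares as a stand-in "black box".
w, *_ = np.linalg.lstsq(X, y, rcond=None)

def permutation_importance(X, y, w, col, rng):
    """Increase in mean squared error when one feature column is shuffled."""
    base_err = np.mean((X @ w - y) ** 2)
    Xp = X.copy()
    Xp[:, col] = rng.permutation(Xp[:, col])
    perm_err = np.mean((Xp @ w - y) ** 2)
    return perm_err - base_err

importances = np.array(
    [permutation_importance(X, y, w, j, rng) for j in range(X.shape[1])]
)

# Compare the explanation's ranking with the causal ground truth:
# by construction, x0 should rank first, then x1, with x2 irrelevant.
explained_rank = list(np.argsort(-importances))
print(explained_rank)
```

In the thesis's setting, the permutation-importance step would be replaced by LIME or SHAP scores from a trained deep model, and the agreement between rankings would be measured across many generated causal structures rather than a single example.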