
Using Associative Classifiers as a Surrogate for Explainability in Text Categorization

  • Author / Creator
    Alam Anik, Md Tanvir
  • Giving reasons to justify the decisions made by classification models has received less attention in recent artificial intelligence breakthroughs than improving the accuracy of those models. Recently, AI researchers have been paying more attention to filling this gap, leading to the emerging field of explainable AI (XAI), which aims to create more transparent and understandable AI systems. One family of XAI approaches, called "model-independent explanations," aims to offer explanations without requiring access to the internals of a trained model.

    This study presents BARBE for text, a technique that can explain, with a high degree of precision, the decisions made by any black-box classifier on text datasets, without relying on information about the model's internal architecture. BARBE does not require a probability score from the black-box classifier. In addition, BARBE offers explanations in two distinct formats: first, a set of generated rules; second, importance scores for salient features. Rules are considered a superior explanation format because they more closely match human intuition. Additionally, BARBE makes use of association rules. An association rule has the form "if X then Y," meaning that when X occurs, the likelihood of Y occurring increases as well. BARBE provides not just a single rule but a set of rules, where each rule may contain a conjunction of features. In this study, we introduce two different versions of BARBE and illustrate their ability to generate effective explanations for sentences of varying lengths. We also propose a data augmentation technique for BARBE that yields more meaningful rules as explanations.

    Our study demonstrates the effectiveness of BARBE in generating explanations for cyberbullying detection in a resource-constrained language. The experimental analysis shows that BARBE outperforms other XAI frameworks in generating more convincing explanations for resource-constrained languages. This is a significant finding, as it demonstrates the potential of BARBE as a tool for improving the explainability of machine learning models trained with formal and informal embeddings, even in contexts where data is limited.
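
    The association-rule notion described above ("if X then Y") can be made concrete with a small sketch. This is not the thesis's BARBE implementation; the data, feature names, and the `rule_stats` helper are invented for illustration. It computes the two standard quality measures for a rule over labelled samples (e.g. texts labelled by a black-box classifier): support, the fraction of all samples where both X and Y hold, and confidence, the fraction of samples containing X that also have label Y.

    ```python
    def rule_stats(samples, antecedent, consequent):
        """Support = P(X and Y); confidence = P(Y | X).

        samples: list of (set_of_words, label) pairs
        antecedent: set of words that must all be present (X)
        consequent: label the rule predicts (Y)
        """
        # Samples whose word set contains every antecedent word.
        has_x = [(words, label) for words, label in samples
                 if antecedent <= words]
        # Of those, samples whose label matches the consequent.
        has_xy = [s for s in has_x if s[1] == consequent]
        support = len(has_xy) / len(samples)
        confidence = len(has_xy) / len(has_x) if has_x else 0.0
        return support, confidence

    # Toy labelled data, standing in for black-box classifier outputs.
    samples = [
        ({"you", "are", "stupid"}, "bullying"),
        ({"you", "are", "great"}, "benign"),
        ({"stupid", "idea"}, "bullying"),
        ({"great", "idea"}, "benign"),
    ]

    # Rule: if "stupid" appears, then the label is "bullying".
    sup, conf = rule_stats(samples, {"stupid"}, "bullying")
    # → support 0.5 (2 of 4 samples), confidence 1.0 (2 of 2 matches)
    ```

    A rule with a conjunction of features, as in the thesis, simply uses a larger antecedent set, e.g. `{"you", "stupid"}`; high-confidence rules of this form are what make the explanation readable to a human.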

  • Subjects / Keywords
  • Graduation date
    Fall 2023
  • Type of Item
    Thesis
  • Degree
    Master of Science
  • DOI
    https://doi.org/10.7939/r3-r42g-k130
  • License
    This thesis is made available by the University of Alberta Libraries with permission of the copyright owner solely for non-commercial purposes. This thesis, or any portion thereof, may not otherwise be copied or reproduced without the written consent of the copyright owner, except to the extent permitted by Canadian copyright law.