Probing the Robustness of Pre-trained Language Models for Structured and Unstructured Entity Matching

  • Author / Creator
    Akbarian Rastaghi, Mehdi
  • Abstract
    The paradigm of fine-tuning Pre-trained Language Models (PLMs) has been successful in Entity Matching (EM), and many contemporary works leverage PLM-based models to push the state of the art. However, transformer-based models also have downsides in this task: despite their remarkable performance, PLMs tend to learn spurious correlations from training data. In this thesis, we investigate whether PLM-based EM models can be trusted in real-world applications where the data distribution differs from that of training. To this end, we design an evaluation benchmark to assess the robustness of structured EM models and facilitate their deployment in real-world settings. We then prescribe simple modifications that improve the robustness of PLM-based EM models for structured and unstructured data. In addition, to evaluate model performance on unstructured entity matching, we develop a new unstructured matching dataset. We extend our experiments to study the effect of deep classifiers, data augmentation, and the loss function on model robustness. Our experiments show that, while yielding superior in-domain results, our proposed model significantly improves robustness compared to state-of-the-art EM models. (A minimal illustrative sketch of the PLM-based EM setup appears at the end of this record.)

  • Subjects / Keywords
  • Graduation date
    Spring 2023
  • Type of Item
    Thesis
  • Degree
    Master of Science
  • DOI
    https://doi.org/10.7939/r3-rfmv-yb70
  • License
    This thesis is made available by the University of Alberta Libraries with permission of the copyright owner solely for non-commercial purposes. This thesis, or any portion thereof, may not otherwise be copied or reproduced without the written consent of the copyright owner, except to the extent permitted by Canadian copyright law.
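
For context on the setup the abstract describes: PLM-based EM models typically serialize a pair of entity records into a single text sequence and fine-tune a transformer with a binary match/no-match classification head. The sketch below is a minimal illustration of that common pattern, not the thesis's implementation; the base model, the COL/VAL serialization scheme (popularized by Ditto), and the example records are illustrative assumptions.

    # Minimal sketch of PLM-based entity matching (illustrative; not the thesis code).
    from transformers import AutoTokenizer, AutoModelForSequenceClassification
    import torch

    MODEL = "bert-base-uncased"  # assumed base PLM; the thesis may use a different one
    tokenizer = AutoTokenizer.from_pretrained(MODEL)
    model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=2)

    def serialize(record: dict) -> str:
        # "COL <attr> VAL <value>" serialization, common in PLM-based EM work (e.g., Ditto)
        return " ".join(f"COL {k} VAL {v}" for k, v in record.items())

    left = {"name": "iPhone 14 Pro", "brand": "Apple", "price": "999"}
    right = {"name": "Apple iPhone 14 Pro 128GB", "brand": "Apple", "price": "999.00"}

    # Encode the record pair as one sequence; the tokenizer inserts [SEP] between them.
    inputs = tokenizer(serialize(left), serialize(right), return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    match_prob = torch.softmax(logits, dim=-1)[0, 1].item()
    print(f"P(match) = {match_prob:.3f}")  # meaningless until the head is fine-tuned on labeled pairs

In practice such a model is fine-tuned on labeled record pairs; the thesis probes how well that setup holds up when the test distribution shifts away from the training data.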