Debiasing Language Models using Distributionally Robust Optimization
-
- Author / Creator
- Gandhi, Deep Rajesh
-
Pretrained language models have been shown to exhibit biases and social stereotypes. Prior work on debiasing these models has largely focused on modifying embedding spaces during pretraining, which does not scale to large models. Since pretrained models are typically fine-tuned on task-specific datasets for various downstream tasks, fine-tuning can both degrade language model performance and amplify biases present in the fine-tuning datasets. We therefore focus on the fine-tuning phase rather than the computationally expensive pretraining. While training language models with traditional optimization approaches such as Empirical Risk Minimization has proven beneficial for downstream tasks, we often observe the social biases in these models being amplified during fine-tuning. To counter this, we propose RobustDebias, a novel mechanism that adapts Distributionally Robust Optimization (DRO) methods to debias large language models during the fine-tuning phase. In this work, we focus on debiasing a model across multiple demographics while it is fine-tuned on the Masked Language Modeling (MLM) task; our method can be used to fine-tune a pretrained model on any dataset and for any task. We perform extensive experiments on various language models, and the results show that our method significantly mitigates bias while minimizing the impact on language model performance.
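The abstract does not spell out the optimization details, but a common way to adapt DRO across demographic groups is group DRO, which maintains a weight per group and upweights whichever group currently incurs the highest loss. The sketch below is an illustrative reconstruction under that assumption, not the thesis's actual algorithm; the function name, the exponentiated-gradient update, and the step size `eta` are all hypothetical choices for exposition.

```python
import math

def group_dro_step(group_losses, group_weights, eta=0.1):
    """One illustrative group-DRO update over demographic groups.

    group_losses  -- average MLM loss per demographic group (hypothetical values)
    group_weights -- current mixture weights over groups (non-negative, sum to 1)
    eta           -- step size for the exponentiated-gradient weight update

    Returns the robust (weight-averaged) loss and the updated weights.
    """
    # Exponentiated-gradient ascent: groups with higher loss gain weight,
    # so the model is pushed to reduce its worst-group error.
    raw = [w * math.exp(eta * l) for w, l in zip(group_weights, group_losses)]
    total = sum(raw)
    new_weights = [w / total for w in raw]

    # Robust objective: the loss the model would backpropagate through.
    robust_loss = sum(w * l for w, l in zip(new_weights, group_losses))
    return robust_loss, new_weights
```

In a fine-tuning loop, `robust_loss` would replace the plain ERM average over the batch, so no group's bias-amplifying error is ignored merely because that group is underrepresented in the dataset.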
-
- Graduation date
- Fall 2024
-
- Type of Item
- Thesis
-
- Degree
- Master of Science
-
- License
- This thesis is made available by the University of Alberta Library with permission of the copyright owner solely for non-commercial purposes. This thesis, or any portion thereof, may not otherwise be copied or reproduced without the written consent of the copyright owner, except to the extent permitted by Canadian copyright law.