This file is in the following communities:

Faculty of Graduate Studies and Research



Developing and Evaluating Methods for Mitigating Sample Selection Bias in Machine Learning (Open Access)


Subject / Keyword
Software Defect Prediction
Sample Selection Bias
Machine Learning
Software Reliability
Learning in Imbalanced Datasets
Type of item
Thesis
Degree grantor
University of Alberta
Author or creator
Pelayo Ramirez, Lourdes
Supervisor and department
Dick, Scott (Electrical and Computer Engineering)
Examining committee member and department
Denzinger, Joerg (Computer Science University of Calgary)
Pedrycz, Witold (Electrical and Computer Engineering)
Sutton, Richard (Computer Science)
Reformat, Marek (Electrical and Computer Engineering)
Department of Electrical and Computer Engineering

Date accepted
Graduation date
Degree level
Doctor of Philosophy
The imbalanced learning problem arises in many economic and health domains of great importance; consequently, it has drawn significant interest from academia, industry, and government funding agencies. Several researchers have used stratification to alleviate this problem, but it remains unclear which strategy is more effective in general: under-sampling, over-sampling, or a combination of the two. Our first topic evaluates the contribution of stratification strategies in software defect prediction. We study the statistical contribution of stratification on the new Mozilla dataset, a large-scale software defect prediction dataset that includes both object-oriented metrics and a count of defects per module. Our second topic addresses the debate about the relative contributions of over-sampling, under-sampling, and their combination through a full-factorial design experiment, analyzed with Analysis of Variance (ANOVA), over six software defect prediction datasets. We then extend our research to develop a stratification method that mitigates sample selection bias in function approximation problems. Sample selection bias is present when the training and test instances are drawn from different distributions; the imbalanced dataset problem can be considered a particular case of sample selection bias. We extend the well-known SMOTE over-sampling technique to continuous-valued response variables. Our new algorithm proves valuable, improving performance on function approximation problems and effectively reducing the impact of sample selection bias.
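The core idea of extending SMOTE to continuous-valued response variables can be sketched as follows. This is not the thesis's algorithm, only a minimal illustration under common assumptions: each synthetic instance is produced by interpolating a training point with one of its k nearest neighbours in feature space, and the response value is interpolated with the same random factor. The function name `smote_regression` and all parameters are hypothetical.

```python
import math
import random

def smote_regression(X, y, n_synthetic, k=3, seed=0):
    """Generate synthetic (features, response) pairs by interpolating a
    randomly chosen instance with one of its k nearest neighbours.
    Interpolating the response with the same factor as the features is
    the key change needed for continuous-valued targets (a sketch, not
    the thesis's actual algorithm)."""
    rng = random.Random(seed)

    def dist(a, b):
        # Euclidean distance in feature space
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

    synth_X, synth_y = [], []
    for _ in range(n_synthetic):
        i = rng.randrange(len(X))
        # k nearest neighbours of X[i], excluding X[i] itself
        nbrs = sorted((j for j in range(len(X)) if j != i),
                      key=lambda j: dist(X[i], X[j]))[:k]
        j = rng.choice(nbrs)
        gap = rng.random()  # interpolation factor in [0, 1)
        synth_X.append([xi + gap * (xj - xi)
                        for xi, xj in zip(X[i], X[j])])
        synth_y.append(y[i] + gap * (y[j] - y[i]))
    return synth_X, synth_y
```

Because each synthetic response is a convex combination of two observed responses, it always falls within the range of the original targets; a practical implementation would typically restrict sampling to the under-represented region of the response distribution.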
Permission is hereby granted to the University of Alberta Libraries to reproduce single copies of this thesis and to lend or sell such copies for private, scholarly or scientific research purposes only. Where the thesis is converted to, or otherwise made available in digital form, the University of Alberta will advise potential users of the thesis of these terms. The author reserves all other publication and other rights in association with the copyright in the thesis and, except as herein before provided, neither the thesis nor any substantial portion thereof may be printed or otherwise reproduced in any material form whatsoever without the author's prior written permission.
Citation for previous publication

File Details

Date Uploaded
Date Modified
File format: pdf (Portable Document Format)
Mime type: application/pdf
File size: 6239882 bytes
Last modified: 2015-10-12 20:31:08-06:00
Filename: Pelayo_Lourdes_Fall2011.pdf
Original checksum: 3ec44d1235465e99a36673c98064f0e0
Well formed: true
Valid: true
Page count: 380
File language: en-US