ERA


Permanent link (DOI): https://doi.org/10.7939/R31D1P


Communities

This file is in the following communities:

Graduate Studies and Research, Faculty of

Collections

This file is in the following collections:

Theses and Dissertations

Large-scale semi-supervised learning for natural language processing (Open Access)

Descriptions

Subject/Keyword
semi-supervised learning
computational linguistics
non-anaphoric pronoun
NLP
non-referential pronoun
string similarity
selectional preference
pleonastic pronoun
web-scale N-gram
natural language processing
Type of item
Thesis
Degree grantor
University of Alberta
Author or creator
Bergsma, Shane A
Supervisor and department
Lin, Dekang (Computing Science)
Goebel, Randy (Computing Science)
Examining committee member and department
Kondrak, Greg (Computing Science)
Hovy, Eduard (Information Sciences Institute, University of Southern California)
Schuurmans, Dale (Computing Science)
Westbury, Chris (Psychology)
Department
Department of Computing Science
Date accepted
2010-09-29T22:41:02Z
Graduation date
2010-11
Degree
Doctor of Philosophy
Degree level
Doctoral
Abstract
Natural Language Processing (NLP) develops computational approaches to processing language data. Supervised machine learning has become the dominant methodology of modern NLP. The performance of a supervised NLP system crucially depends on the amount of data available for training. In the standard supervised framework, if a sequence of words was not encountered in the training set, the system can only guess at its label at test time. The cost of producing labeled training examples is a bottleneck for current NLP technology. On the other hand, a vast quantity of unlabeled data is freely available. This dissertation proposes effective, efficient, versatile methodologies for 1) extracting useful information from very large (potentially web-scale) volumes of unlabeled data and 2) combining such information with standard supervised machine learning for NLP. We demonstrate novel ways to exploit unlabeled data, we scale these approaches to make use of all the text on the web, and we show improvements on a variety of challenging NLP tasks. This combination of learning from both labeled and unlabeled data is often referred to as semi-supervised learning.

Although unlabeled patterns lack manually-provided labels, their statistics can often distinguish the correct label for an ambiguous test instance. In the first part of this dissertation, we propose to use the counts of unlabeled patterns as features in supervised classifiers, with these classifiers trained on varying amounts of labeled data. We propose a general approach for integrating information from multiple, overlapping sequences of context for lexical disambiguation problems. We also show how standard machine learning algorithms can be modified to incorporate a particular kind of prior knowledge: knowledge of effective weightings for count-based features. We also evaluate performance within and across domains for two generation and two analysis tasks, assessing the impact of combining web-scale counts with conventional features.

In the second part of this dissertation, rather than using the aggregate statistics as features, we propose to use them to generate labeled training examples. By automatically labeling a large number of examples, we can train powerful discriminative models, leveraging fine-grained features of input words.
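The count-based-feature idea summarized in the abstract can be sketched in miniature. The following is a hypothetical illustration, not code or data from the thesis: a toy dictionary stands in for web-scale N-gram counts, and log-counts of overlapping context patterns are used to pick between two confusable candidate words. A real system, as the abstract notes, would feed such features to a trained classifier over varying amounts of labeled data.

```python
# Hypothetical sketch of count-based features for lexical disambiguation.
# NGRAM_COUNTS and the candidate pair are invented for illustration;
# they are stand-ins for web-scale N-gram counts, not thesis data.

import math

NGRAM_COUNTS = {
    ("want", "to", "accept"): 41000,
    ("want", "to", "except"): 120,
    ("to", "accept", "the"): 98000,
    ("to", "except", "the"): 310,
}

def context_features(left, right, candidate):
    """Log-counts of overlapping context patterns containing the candidate."""
    patterns = [
        (left[-2], left[-1], candidate),   # two left words + candidate
        (left[-1], candidate, right[0]),   # candidate flanked on both sides
    ]
    return [math.log1p(NGRAM_COUNTS.get(p, 0)) for p in patterns]

def disambiguate(left, right, candidates):
    """Pick the candidate whose context patterns are most frequent.

    Summing log-counts is the simplest unsupervised baseline; the
    dissertation's approach instead learns weights for such features.
    """
    return max(candidates, key=lambda c: sum(context_features(left, right, c)))

choice = disambiguate(["want", "to"], ["the"], ["accept", "except"])
print(choice)  # accept
```

The overlapping patterns (one anchored to the left context, one spanning the candidate) reflect the abstract's point about integrating multiple, overlapping sequences of context rather than a single fixed window.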
Language
English
DOI
doi:10.7939/R31D1P
Rights
License granted by Shane Bergsma (sbergsma@ualberta.ca) on 2010-09-26T23:22:59Z (GMT): Permission is hereby granted to the University of Alberta Libraries to reproduce single copies of this thesis and to lend or sell such copies for private, scholarly or scientific research purposes only. Where the thesis is converted to, or otherwise made available in digital form, the University of Alberta will advise potential users of the thesis of the above terms. The author reserves all other publication and other rights in association with the copyright in the thesis, and except as herein provided, neither the thesis nor any substantial portion thereof may be printed or otherwise reproduced in any material form whatsoever without the author's prior written permission.
File Details

Date Modified
2014-04-24T23:03:25.195+00:00
Characterization
File format: pdf (Portable Document Format)
Mime type: application/pdf
File size: 937353 bytes (~915 KiB)
Last modified: 2015-10-12 12:07:20-06:00
Filename: dissertation.Bergsma.pdf
Original checksum: e67c1c38710c3dc9c39dce0230c84cf1
Well formed: true
Valid: true
File title: dissertation.dvi
Page count: 137