This decommissioned ERA site remains active temporarily to support our final migration steps to https://ualberta.scholaris.ca, ERA's new home. All new collections and items, including Spring 2025 theses, are at that site. For assistance, please contact erahelp@ualberta.ca.
Theses and Dissertations
This collection contains theses and dissertations of University of Alberta graduate students. It includes a large number of electronically available theses granted from 1947 to 2009, roughly 90% of theses granted from 2009 to 2014, and 100% of theses granted from April 2014 to the present (unless a thesis is under temporary embargo by agreement with the Faculty of Graduate and Postdoctoral Studies). IMPORTANT NOTE: To conduct a comprehensive search of all University of Alberta theses held in University of Alberta Libraries collections, search the library catalogue at www.library.ualberta.ca - you may search by Author, Title, Keyword, or Department.
To retrieve all theses and dissertations associated with a specific department from the library catalogue, choose 'Advanced' and run a keyword search such as "university of alberta dept of english" OR "university of alberta department of english". Past graduates who wish to have their thesis or dissertation added to this collection can contact us at erahelp@ualberta.ca.
Items in this Collection
- Karami, Mahdi (1)
- Kuan, Li-Hao (1)
- Shariff, Roshan (1)
- Shi, Jichuan (1)
- Solinas, Christopher (1)
- Vega Romero, Roberto Ivan (1)
- Artificial Intelligence (1)
- Artificial intelligence (1)
- Bandits (1)
- Computing Science (1)
- Deep learning (1)
- Domain adaptation (1)
-
Spring 2015
Sampling from a given probability distribution is a key problem in many different disciplines. Markov chain Monte Carlo (MCMC) algorithms approach this problem by constructing a random walk governed by a specially constructed transition probability distribution. As the random walk progresses, the distribution of its states converges to the required target distribution. The Metropolis-Hastings (MH) algorithm is a generally applicable MCMC method which, given a proposal distribution, modifies it by adding an accept/reject step: it proposes a new state based on the proposal distribution and the existing state of the random walk, then either accepts or rejects it with a certain probability; if it is rejected, the old state is retained. The MH algorithm is most effective when the proposal distribution closely matches the target distribution: otherwise most proposals will be rejected and convergence to the target distribution will be slow.
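The accept/reject step described in this abstract can be sketched in a few lines. This is a minimal random-walk Metropolis-Hastings sampler with a symmetric Gaussian proposal (so the proposal density cancels in the acceptance ratio); the target density and all parameter choices here are illustrative, not taken from the thesis.

```python
import numpy as np

def metropolis_hastings(log_target, init, n_samples, proposal_scale=1.0, seed=0):
    """Random-walk Metropolis-Hastings with a symmetric Gaussian proposal."""
    rng = np.random.default_rng(seed)
    x = init
    samples = []
    for _ in range(n_samples):
        # Propose a new state from the current state of the random walk.
        x_new = x + proposal_scale * rng.normal()
        # Accept with probability min(1, target(x_new) / target(x));
        # the symmetric proposal cancels in the ratio.
        if np.log(rng.uniform()) < log_target(x_new) - log_target(x):
            x = x_new
        samples.append(x)  # on rejection, the old state is retained
    return np.array(samples)

# Example target: a standard normal (an unnormalized log-density suffices).
draws = metropolis_hastings(lambda x: -0.5 * x**2, init=0.0, n_samples=20000)
```

Because only a ratio of target densities is needed, the normalizing constant of the target never has to be computed, which is what makes MH broadly applicable.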
-
Spring 2019
However, these works have largely neglected another important part of the equation: inference. Inference involves estimating the state probability distribution of an information set using state information like past opponent actions. It lets players of trick-taking card games predict which cards opponents are holding based on the cards that have been played so far. Inference is crucial for the performance of algorithms that use determinization because it allows states to be sampled according to a better estimate of the true state probability distribution in the information set. … handling the larger input feature spaces associated with a richer state representation, and lastly, I explain how to combine these predictions to estimate the probability distribution of states within an information set and improve determinized search techniques, leading to a new state-of-the-art in imperfect information games.
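The core idea of inference-aware determinization can be illustrated with a small sketch: instead of sampling hidden states (determinizations) uniformly, sample them in proportion to an inferred distribution. The candidate "deals" and their probabilities below are hypothetical stand-ins, not the thesis's actual model.

```python
import numpy as np

def sample_determinizations(candidate_states, inferred_probs, n, seed=0):
    """Sample hidden states for determinized search, weighted by an
    inferred state distribution rather than uniformly."""
    rng = np.random.default_rng(seed)
    p = np.asarray(inferred_probs, dtype=float)
    p /= p.sum()  # normalize to a proper probability distribution
    idx = rng.choice(len(candidate_states), size=n, p=p)
    return [candidate_states[i] for i in idx]

# Hypothetical example: three possible deals of the opponents' hidden cards,
# with the first judged most likely given the cards played so far.
deals = ["deal_A", "deal_B", "deal_C"]
worlds = sample_determinizations(deals, [0.7, 0.2, 0.1], n=1000)
```

A search algorithm would then run its perfect-information solver on each sampled world, so that likely worlds dominate the aggregated decision.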
-
Machine learning for medical applications with limited data: Incorporating domain expertise and addressing domain-shift
Fall 2022
… labels used during training, and (3) differences between the distributions that generated the training and test data. This dissertation focuses on strategies for effectively applying machine learning under these circumstances. For learning models from a limited number of labeled instances, we propose … the labels, we use probabilistic graphical models. Instead of providing a point-estimate, probabilistic models predict an entire probability distribution, which accounts for the uncertainty in the data. Probabilistic models are a key component of the probabilistic labels mentioned above, and they also … the data used during inference -- in particular, the test set might not follow the same probability distribution that generated the training data. This means that a predictor learned from one dataset might do poorly when applied to a second dataset. This problem is known as batch effects or dataset shift.
-
Comparing Parameterization Methods for Loss-Based Discrete-Time Individual Survival Prediction Models
Fall 2023
Given a patient's description, a survival prediction model estimates that patient's survival time. We consider the challenge of learning an individual survival distribution (ISD) model from a dataset that includes censored training instances – i.e., data that provides only the lower bound of the survival time for some patients. In general, an ISD model maps each patient x to his/her survival distribution, which is the probability that patient x will survive until time t, for each t > 0. We focus on discrete-time ISD models, which partition the future time into multiple time intervals and then apply machine-learned regressors to estimate the survival probability in each time interval. These discrete-time ISD models can usually use fewer parameters than continuous models to describe different shapes of survival distributions by discretizing the survival time. We compare four survival models …
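A common way to assemble a discrete-time survival curve from per-interval predictions is to treat each regressor's output as a conditional event probability (hazard) for its interval, and multiply the complements. This is a minimal sketch of that standard construction; the hazard values are hypothetical, not results from the thesis.

```python
import numpy as np

def survival_curve(hazards):
    """Turn per-interval conditional event probabilities (hazards) into a
    discrete survival distribution: S(t_k) = prod_{j<=k} (1 - h_j)."""
    hazards = np.asarray(hazards, dtype=float)
    return np.cumprod(1.0 - hazards)

# Hypothetical hazards predicted for one patient over four time intervals.
S = survival_curve([0.1, 0.2, 0.3, 0.4])
# S is monotonically non-increasing: [0.9, 0.72, 0.504, 0.3024]
```

The discretization is what keeps the parameter count low: one hazard per interval describes an arbitrary survival-curve shape at that resolution.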
-
Adaptive local threshold with shape information and its application to oil sand image segmentation
Spring 2010
This thesis is concerned with a novel local threshold segmentation algorithm for digital images incorporating shape information. In image segmentation, most local threshold algorithms are based only on intensity analysis. In many applications where an image contains objects with a similar shape, in addition to the intensity information, some prior known shape attributes could be exploited to improve the segmentation. The goal of this work is to design a local threshold algorithm that includes shape information to enhance the segmentation quality. The algorithm adaptively selects a local threshold. Shape attribute distributions are learned from typical objects in ground truth images. The local threshold for each object in an image to be segmented is chosen to maximize probabilities under these shape-attribute distributions. Then, for the application of oil sand image segmentation, a supervised …
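The threshold-selection idea can be sketched as a scoring loop: for each candidate threshold, segment the patch, measure a shape attribute of the resulting object, and keep the threshold whose attribute is most probable under a distribution learned from ground-truth objects. This toy version uses object area under a Gaussian model as the single shape attribute; the attribute choice, the Gaussian form, and all values are simplifying assumptions, not the thesis's actual attribute set.

```python
import numpy as np

def best_local_threshold(patch, thresholds, area_mean, area_std):
    """Choose, from candidate thresholds, the one whose segmented object's
    area is most probable under a Gaussian shape-attribute model learned
    from ground-truth objects (a simplified, hypothetical attribute)."""
    best_t, best_logp = None, -np.inf
    for t in thresholds:
        area = float((patch >= t).sum())  # object area at this threshold
        logp = -0.5 * ((area - area_mean) / area_std) ** 2  # Gaussian log-score
        if logp > best_logp:
            best_t, best_logp = t, logp
    return best_t

# Toy patch: a bright 4x4 object with a brighter core, on a dark background.
patch = np.zeros((8, 8))
patch[2:6, 2:6] = 120.0
patch[3:5, 3:5] = 220.0
t = best_local_threshold(patch, thresholds=[100, 150, 200], area_mean=16, area_std=2)
```

A real implementation would score several attributes jointly (e.g. area, eccentricity, compactness) and run this per detected object rather than per fixed patch.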
-
Spring 2011
… Unit (GPU) on graphics cards has enabled us to develop real-time interactive simulators of complex physical phenomena. In this thesis, two GPU-based implementations of interactive physical simulations are presented: (1) visualization of the electron probability distribution of a hydrogen atom, and (2) visualization and simulation of a particle-based fluid dynamic model using smoothed particle hydrodynamics. These simulations were developed in the context of the Microscopic and Subatomic Visualization (MASAV) project as a demonstration of the capabilities of the GPU to create realistic interactive physical simulations.
-
Spring 2011
The goal of top-k ranking is to rank individuals so that the best k of them can be determined. Depending on the application domain, an individual can be a person, a product, an event, or just a collection of data or information for which an ordering makes sense. In the context of databases, top-k … mechanisms investigated for what are called uncertain databases or probabilistic databases, where a tuple is associated with a membership probability indicating the level of confidence in the stored information. In this thesis, we study top-k ranking with uncertain data in two general areas. The first is on … show experimentally that pruning can generate orders-of-magnitude performance gains. In the second area of our investigation, we study the problem of top-k ranking for objects with multiple attributes whose values are modeled by probability distributions and constraints. We formulate a theory of top-k …
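To make the setting concrete, here is a toy sketch of one simple ranking semantics for uncertain tuples: order them by expected score, i.e. membership probability times score. This is only one of several semantics studied in the probabilistic-database literature, and is not necessarily the formulation used in this thesis; the tuples below are hypothetical.

```python
def topk_expected(tuples, k):
    """Rank uncertain tuples by expected score: membership probability
    times score (one simple semantics among several studied for
    probabilistic databases)."""
    ranked = sorted(tuples, key=lambda t: t["p"] * t["score"], reverse=True)
    return ranked[:k]

# Hypothetical uncertain tuples with membership probabilities.
rows = [
    {"id": "a", "p": 0.9, "score": 50},   # expected score 45
    {"id": "b", "p": 0.5, "score": 100},  # expected score 50
    {"id": "c", "p": 1.0, "score": 30},   # expected score 30
]
top2 = topk_expected(rows, k=2)
```

Note how uncertainty changes the answer: tuple "b" has the highest raw score but only probability 0.5, yet still outranks the near-certain "a" under this semantics.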
-
Fall 2016
In an online learning problem, a player makes decisions in a sequential manner. In each round, the player receives some reward that depends on his action and an outcome generated by the environment, while some feedback information about the outcome is revealed. The goal of the player can vary. In this thesis we investigate several variants of online learning problems with different feedback models and objectives. First, we consider the pure exploration problem with multi-action probes. We design algorithms that can find the best one or several actions with high probability while using as few probes as possible. Then we study the side observation model in the regret minimization scenario. We derive a novel finite-time distribution-dependent lower bound and design asymptotically optimal and minimax optimal algorithms. Last, we investigate the conservative bandit problem, where the …
-
Advances in Probabilistic Generative Models: Normalizing Flows, Multi-View Learning, and Linear Dynamical Systems
Fall 2020
… applicable method to construct complex probability densities. Herein, I investigate a set of invertible convolutional flows based on circular and symmetric convolutions with efficient Jacobian-determinant computation and inverse mapping (deconvolution) in O(N log N) time. Further, an analytic approach to … In the second part, a deep generative framework is expanded to multi-view learning. This model is composed of a linear probabilistic multi-view layer in the latent space in conjunction with deep generative networks as observation models, where the variations of each view are captured by a shared latent representation and a set of view-specific factors. To approximate the posterior distribution of the latent probabilistic multi-view layer, a variational inference approach is developed that results in a scalable algorithm for training deep generative multi-view neural networks. Empirical studies confirm that the …
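The O(N log N) Jacobian computation for a circular-convolution flow rests on a standard fact: the Jacobian of a circular convolution is a circulant matrix, whose eigenvalues are the DFT coefficients of the kernel, so log|det J| is just a sum of log-magnitudes of FFT values. This sketch verifies that identity on a toy kernel; it illustrates the general principle, not the thesis's specific flow architecture.

```python
import numpy as np

def circular_conv(x, w):
    """y = circular convolution of kernel w with signal x, via the FFT."""
    return np.real(np.fft.ifft(np.fft.fft(w) * np.fft.fft(x)))

def logabsdet_jacobian(w):
    """The Jacobian of x -> circular_conv(x, w) is circulant with
    eigenvalues FFT(w), so log|det J| = sum_k log|FFT(w)_k|,
    computable in O(N log N) time."""
    return float(np.sum(np.log(np.abs(np.fft.fft(w)))))

# Toy kernel, and the dense circulant Jacobian built explicitly for checking.
w = np.array([1.0, 0.5, 0.0, 0.25])
N = len(w)
J = np.array([[w[(i - j) % N] for j in range(N)] for i in range(N)])
```

Inversion (deconvolution) costs the same: divide by FFT(w) in the frequency domain instead of multiplying, which is why both directions of the flow stay O(N log N).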