Theses and Dissertations
This collection contains theses and dissertations of graduate students of the University of Alberta. It includes electronic versions of a large number of theses granted from 1947 to 2009, 90% of theses granted from 2009 to 2014, and 100% of theses granted from April 2014 to the present (except theses under temporary embargo by agreement with the Faculty of Graduate and Postdoctoral Studies). IMPORTANT NOTE: To conduct a comprehensive search of all University of Alberta theses held in University of Alberta Libraries collections, search the library catalogue at www.library.ualberta.ca - you may search by Author, Title, Keyword, or Department.
To retrieve all theses and dissertations associated with a specific department from the library catalogue, choose 'Advanced' and run a keyword search for "university of alberta dept of english" OR "university of alberta department of english" (for example). Past graduates who wish to have their thesis or dissertation added to this collection can contact us at erahelp@ualberta.ca.
Items in this Collection
- Kammammettu, Sanjula (2)
- Akude, Philip J (1)
- Al Hasan, Iyad (1)
- Al-Haji, Ahmad (1)
- Alshehri, Naeem S. (1)
- Andrade Rossi, Ricardo (1)
- Department of Civil and Environmental Engineering (23)
- Department of Chemical and Materials Engineering (14)
- Department of Electrical and Computer Engineering (13)
- Department of Biological Sciences (12)
- Department of Computing Science (9)
- Department of Mathematical and Statistical Sciences (9)
- Deutsch, Clayton (Civil and Environmental Engineering) (4)
- Huang, Biao (Chemical and Materials Engineering) (4)
- Boutin, Stan (Biological Sciences) (2)
- Chen, Tongwen (Electrical and Computer Engineering) (2)
- Liang, Hao (Electrical and Computer Engineering) (2)
- Li, Zukui (Department of Chemical and Materials Engineering) (2)
Results for "Probability Distributions on a Circle"
-
Fall 2023
Distributions of sequences modulo one (mod 1) have been studied over the past century with applications in algebra, number theory, statistics, and computer science. For a given sequence, the weak convergence of the associated empirical distributions has been the usual approach to these studies. In this thesis, we give a formula for calculating the Kantorovich distance between mod 1 probability measures. We then use this distance to study the convergence behavior of the (mod 1) empirical distributions associated with real sequences $(x_n)_{n=1}^{\infty}$ for which $\lim_{n\to\infty} n(x_n - x_{n-1})$ exists. We find that for such sequences, every probability distribution in the limit set of the empirical distributions is a rotated version of a certain exponential distribution. We also describe the speed of convergence to this limit set of distributions.
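For context, two standard definitions that this kind of analysis rests on (stated here as background; the thesis's own formula for the distance is not reproduced): the mod 1 empirical distribution of the first $N$ terms and the Kantorovich distance between probability measures on the circle $\mathbb{T}=[0,1)$,
$$\mu_N = \frac{1}{N}\sum_{n=1}^{N}\delta_{\{x_n\}}, \qquad d_K(\mu,\nu)=\sup\left\{\left|\int_{\mathbb{T}} f\,d\mu-\int_{\mathbb{T}} f\,d\nu\right| : f \text{ is 1-Lipschitz for the arc-length metric}\right\},$$
where $\{x_n\}$ denotes the fractional part of $x_n$.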
-
Fall 2018
Keywords: Approximation of probability measures, quantization, Kantorovich metric, Lévy metric, Kolmogorov metric, Benford's Law, slowly changing sequences, asymptotic distribution, invariance property. This thesis is based on four papers. The first two papers fall into the field of approximation of one …
… upper estimate $N^{-1}(\log N)^{1/2}$ is obtained for the rate of convergence w.r.t. the Kantorovich metric on the circle. Moreover, a sharp rate of convergence $N^{-1}\log N$ w.r.t. the Kantorovich and the discrepancy (or Kolmogorov) metrics on the real line is derived. The last
paper proves a threshold result on the existence of a circularly invariant and uniform probability measure (CIUPM) for non-constant linear transformations on the real line, which shows that there is a constant $c$ depending only on the slope of the linear transformation such that there exists a CIUPM if
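For orientation, the two metrics named above have standard definitions on the real line in terms of cumulative distribution functions (the thesis's precise statements and constants may differ):
$$d_K(\mu,\nu)=\int_{\mathbb{R}}\left|F_\mu(x)-F_\nu(x)\right|\,dx, \qquad d_{\mathrm{Kol}}(\mu,\nu)=\sup_{x\in\mathbb{R}}\left|F_\mu(x)-F_\nu(x)\right|,$$
so the quoted rates $N^{-1}(\log N)^{1/2}$ and $N^{-1}\log N$ bound how quickly an $N$-point approximation closes these distances as $N$ grows.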
-
Fall 2010
… nonlinear relations and different qualities. Previous approaches rely on a strong Gaussian assumption or on the combination of source-specific probabilities that are individually calibrated from each data source. This dissertation develops different approaches to integrate diverse earth science data. The first approach is based on combining probabilities: each of the diverse data types is calibrated to generate individual conditional probabilities, and these are combined by a combination model. Some existing models are reviewed, and a combination model with a new weighting scheme is proposed. A weakness of the probability combination schemes (PCS) is addressed. As an alternative to the PCS, this dissertation develops a multivariate analysis technique. The method models the multivariate distributions without a parametric distribution assumption and without ad hoc probability combination procedures. The method accounts …
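To illustrate the general idea of a PCS, a minimal sketch of a weighted log-linear combination of individually calibrated probabilities follows (a generic scheme under assumed inputs, not the dissertation's proposed combination model or weighting scheme):

```python
import numpy as np

def combine_probabilities(cond_probs, prior, weights=None):
    """Weighted log-linear combination of individually calibrated probabilities.

    A generic probability combination scheme (PCS) sketch: each data source i
    supplies a calibrated conditional probability P(event | data_i), and the
    sources are merged through weighted odds ratios against a common prior.
    Illustrative only; not the specific model proposed in the dissertation.
    """
    cond_probs = np.asarray(cond_probs, dtype=float)
    weights = np.ones_like(cond_probs) if weights is None else np.asarray(weights, dtype=float)
    prior_odds = prior / (1.0 - prior)
    # Odds ratio contributed by each source, relative to the prior.
    ratios = (cond_probs / (1.0 - cond_probs)) / prior_odds
    combined_odds = prior_odds * np.prod(ratios ** weights)
    return combined_odds / (1.0 + combined_odds)

# Example: two sources calibrated against a 0.3 prior, the second downweighted.
print(combine_probabilities([0.6, 0.7], prior=0.3, weights=[1.0, 0.8]))
```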
-
Spring 2022
… The discrete fracture network (DFN) model offers a viable alternative for explicit representation of multiple fractures in the domain, where the constituent fracture properties are defined in accordance with specific probability distributions. However, even with the successful modelling of a DFN, the relationship between a set of …
… interpretations, which are useful for inferring the prior probability distributions of relevant fracture parameters. A pilot point scheme and sequential indicator simulation are employed to update the distributions of fracture intensities, which represent the abundance of secondary fractures (NFs) in the entire …
… transmissivity of the secondary induced fracture (Tsf), secondary fracture intensity (Psf32L), secondary fracture aperture (re), and length and height (L and H), in a multifractured shale gas well in the Horn River Basin. An initial realization of the DFN model is sampled from the prior probability distributions …
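To make that sampling step concrete, a minimal sketch of drawing one DFN realization from assumed priors (the distribution families, parameter values, and property names below are illustrative assumptions, not the priors calibrated in the thesis):

```python
import numpy as np

rng = np.random.default_rng(7)

def sample_dfn_realization(n_fractures):
    """Draw one discrete fracture network (DFN) realization from prior distributions.

    Illustrative sketch only: the distribution families and parameters are
    assumptions for demonstration, not those inferred in the thesis.
    """
    return {
        "length_m": rng.lognormal(mean=3.0, sigma=0.5, size=n_fractures),
        "height_m": rng.lognormal(mean=2.5, sigma=0.4, size=n_fractures),
        "aperture_m": rng.lognormal(mean=-8.0, sigma=0.3, size=n_fractures),
        "orientation_deg": rng.uniform(0.0, 180.0, size=n_fractures),
    }

# The fracture count itself can be drawn from a prior on fracture intensity.
realization = sample_dfn_realization(n_fractures=int(rng.poisson(lam=200)))
print({name: values[:3] for name, values in realization.items()})
```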
-
Spring 2015
Sampling from a given probability distribution is a key problem in many different disciplines. Markov chain Monte Carlo (MCMC) algorithms approach this problem by constructing a random walk governed by a specially constructed transition probability distribution. As the random walk progresses, the
distribution of its states converges to the required target distribution. The Metropolis-Hastings (MH) algorithm is a generally applicable MCMC method which, given a proposal distribution, modifies it by adding an accept/reject step: it proposes a new state based on the proposal distribution and the existing
state of the random walk, then either accepts or rejects it with a certain probability; if it is rejected, the old state is retained. The MH algorithm is most effective when the proposal distribution closely matches the target distribution: otherwise most proposals will be rejected and convergence to
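A minimal sketch of the accept/reject step described above, assuming a symmetric Gaussian random-walk proposal and a one-dimensional target (illustrative only, not tied to any particular proposal construction studied in the thesis):

```python
import numpy as np

def metropolis_hastings(log_target, n_steps, x0=0.0, proposal_scale=1.0, seed=0):
    """Minimal random-walk Metropolis-Hastings sampler.

    log_target: log of the (unnormalized) target density. With a symmetric
    Gaussian proposal, the acceptance probability reduces to the ratio of the
    target density at the proposed state to that at the current state.
    """
    rng = np.random.default_rng(seed)
    samples = np.empty(n_steps)
    x, log_px = x0, log_target(x0)
    for i in range(n_steps):
        x_prop = x + proposal_scale * rng.standard_normal()  # propose a new state
        log_px_prop = log_target(x_prop)
        if np.log(rng.uniform()) < log_px_prop - log_px:     # accept with prob min(1, ratio)
            x, log_px = x_prop, log_px_prop                  # accepted: move to the proposal
        samples[i] = x                                       # rejected: the old state is retained
    return samples

# Example: sampling from a standard normal target density.
draws = metropolis_hastings(lambda x: -0.5 * x**2, n_steps=5000)
print(draws.mean(), draws.std())
```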
-
Re-Sampling the Ensemble Kalman Filter for Improved History Matching and Characterizations of Non-Gaussian and Non-Linear Reservoir Models
Spring 2015
… Gaussian variables, but it often fails to honor the reference probability distribution of the model parameters when the distributions of the model parameters are non-Gaussian and the system dynamics are strongly nonlinear. In this thesis, novel sampling procedures are proposed to honor geologic information in …
… certain number of assimilation steps, the updated ensemble is used to generate a new ensemble that is conditional to both the geological information and the early production data. Probability field simulation and a novel probability-weighted re-sampling scheme are introduced to re-sample a new ensemble. After the re-sampling step, iterative EnKF is again applied to the ensemble members to assimilate the remaining production history. A new automated dynamic data integration workflow is implemented for the characterization and uncertainty assessment of fractured reservoir models. This new methodology includes …
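For reference, the standard EnKF analysis step that such workflows build on updates each forecast ensemble member $x_i^f$ with perturbed observations $d_i$ (the re-sampling procedures themselves are the thesis's contribution and are not reproduced here):
$$x_i^a = x_i^f + K\left(d_i - H x_i^f\right), \qquad K = P^f H^{\mathrm{T}}\left(H P^f H^{\mathrm{T}} + R\right)^{-1},$$
where $P^f$ is the forecast covariance estimated from the ensemble, $H$ is the observation operator, and $R$ is the observation-error covariance; the Gaussian and linear assumptions implicit in this update are what the proposed re-sampling aims to relax.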
-
Spring 2019
predict which cards opponents are holding based on the cards that have been played so far. Inference is crucial for the performance of algorithms that use determinization because it allows states to be sampled according to a better estimate of the true state probability distribution in the information set
handling the larger input feature spaces associated with a richer state representation, and lastly, I explain how to combine these predictions to estimate the probability distribution of states within an information set and improve determinized search techniques — leading to a new state-of-the-art in
imperfect information games. However, these works have largely neglected another important part of the equation: inference. Inference involves estimating the state probability distribution of an information set using state information like past opponent actions. It lets players of trick-taking card games
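As a rough illustration of that idea, a sketch of weighting candidate hidden states by the likelihood of the observed opponent actions and then sampling a determinization (the likelihood model here is a hypothetical placeholder, not the learned predictor described in the thesis):

```python
import random

def sample_determinization(candidate_states, action_likelihood, observed_actions, rng=random):
    """Sample a hidden state (determinization) in proportion to its inferred probability.

    Generic sketch of inference for determinized search: every hidden state
    consistent with public information is weighted by the likelihood of the
    opponent actions observed so far, giving an unnormalized posterior
    P(state | actions). `action_likelihood(state, action)` is a hypothetical
    placeholder for an opponent model, not the predictor from the thesis.
    """
    weights = []
    for state in candidate_states:
        w = 1.0
        for action in observed_actions:
            w *= action_likelihood(state, action)
        weights.append(w)
    return rng.choices(candidate_states, weights=weights, k=1)[0]

# Example with a toy likelihood model over two candidate hidden hands.
states = ["opponent_holds_ace", "opponent_holds_king"]
likelihood = lambda s, a: 0.8 if (a == "led_high") == (s == "opponent_holds_ace") else 0.2
print(sample_determinization(states, likelihood, ["led_high"]))
```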
-
A Hybrid Fuzzy Discrete Event Simulation Framework for Analysis of Stochastic and Subjective Uncertainties in Construction Projects
Spring 2015
… Stochastic uncertainty is a system property and represents the uncertainty associated with the variation of a variable; it can be represented by a probability distribution. Subjective uncertainty, on the other hand, represents the system modeller's lack of knowledge regarding the actual …
… DES is only able to consider stochastic uncertainty using probability distributions and cannot handle subjective uncertainty. Fuzzy set theory provides a methodology for the mathematical modelling of subjective uncertainty. Recently, fuzzy discrete event simulation (FDES) has been proposed for
considering subjective uncertainty in construction simulation models. However, the fundamental differences between fuzzy numbers and probability distributions introduce new challenges to FDES frameworks. Furthermore, subjective and stochastic uncertainties may simultaneously exist in a simulation model
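A small sketch of the contrast described above, with illustrative (assumed) parameter values: a probability distribution of an activity duration is sampled, whereas a triangular fuzzy number assigns a membership grade to each possible value:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stochastic uncertainty: an activity duration described by a probability
# distribution and handled by Monte Carlo sampling in conventional DES.
duration_samples = rng.triangular(left=4.0, mode=6.0, right=10.0, size=1000)

def triangular_membership(x, a=4.0, m=6.0, b=10.0):
    """Subjective uncertainty: the same duration expressed as a triangular fuzzy
    number. The result is a membership grade in [0, 1], not a sampled value.
    Parameters are illustrative, not values used in the thesis."""
    if x <= a or x >= b:
        return 0.0
    return (x - a) / (m - a) if x <= m else (b - x) / (b - m)

print(round(duration_samples.mean(), 2), triangular_membership(5.0))
```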
-
Addressing Order Relation Issues with Constrained Radial Basis Functions and Consistent Indicator Variograms
Fall 2023
Quantifying uncertainty is a critical task of resource delineation in the mining industry. Uncertainty is used to assess risk in economic evaluation and for classification in resource reporting. The inference of local distributions from conditioning data is key to quantifying uncertainty. Multiple indicator kriging (MIK) is a well-established non-parametric local distribution inference technique that does not assume a prior distribution. The local conditional cumulative distribution functions (CCDF) are estimated directly from indicators defined from thresholds. MIK is flexible since it allows the …
… bivariate distribution and shows novel equations for the calculation of probabilities of the internal bivariate distribution. Additionally, it proposes a workflow to use the equations as a tool to aid the indicator variogram modeling process. Third, it proposes a new methodology of MIK that uses the RBF …
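As background for the order-relation issue in the title, MIK estimates the local CCDF at a location $u$ from indicator data at thresholds $z_k$ (standard relations only, not the constrained RBF formulation proposed in the thesis):
$$i(u_\alpha; z_k)=\begin{cases}1, & z(u_\alpha)\le z_k,\\ 0, & \text{otherwise},\end{cases} \qquad \hat{F}(u; z_k \mid (n))=\sum_{\alpha=1}^{n}\lambda_\alpha(z_k)\, i(u_\alpha; z_k),$$
where the weights $\lambda_\alpha(z_k)$ are kriging weights obtained from an indicator variogram fitted at each threshold; order-relation problems arise when the estimated CCDF values are not non-decreasing in $z_k$ or fall outside $[0,1]$.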
-
Fall 2011
This thesis is concerned with the development of new formulae for higher order derivatives, and with the algorithmic, numerical, and analytical development of the G transformation, a method for computing infinite-range integrals. We introduce the Slevinsky-Safouhi formulae I and II with applications, we develop an algorithm for the G transformation, we derive explicit approximations to incomplete Bessel functions and tail probabilities of five probability distributions from the recursive algorithm for the G transformation, and we present all extant work on the analysis of the convergence properties of …
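For context, one common definition of the incomplete Bessel function targeted by such approximations is the infinite-range integral (stated as background; the thesis may use different notation or normalization):
$$K_\nu(x, y)=\int_{1}^{\infty}\frac{e^{-xt - y/t}}{t^{\nu+1}}\,dt,$$
and distribution tail probabilities $\int_x^{\infty} f(t)\,dt$ have the same infinite-range form, which is why both are natural targets for a method that computes infinite-range integrals such as the G transformation.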