Distributed Estimation and Quantization Algorithms for Wireless Sensor Networks

  • Author / Creator
    Movaghati, Sahar
  • In distributed sensing systems, measurements of a random process or parameter are usually not available in one place, and the processing resources are spread over the network. This distributed character of such sensing systems demands special attention when an estimation or inference task must be performed. In contrast to the centralized case, where raw measurements are transmitted to a fusion centre for processing, distributed processing resources can perform local processing, such as data compression or estimation, according to distributed quantization or estimation algorithms. Wireless sensor networks (WSNs) consist of small sensor devices with limited power and processing capability that cooperate through wireless transmission to fulfill a common task. These networks are currently deployed on land, underground, and underwater, in a wide range of applications including environmental sensing, industrial and structural monitoring, and medical care. However, many impediments still hold these networks back from being pervasive, some of which stem from inherent characteristics of WSNs, such as scarce energy and bandwidth resources and the limited processing and storage capability of sensor nodes. Therefore, many challenges must be overcome before WSNs can be employed extensively. In this study, we concentrate on developing algorithms for estimation tasks in distributed sensing systems such as wireless sensor networks. In designing these algorithms, we account for the special constraints and characteristics of such systems, i.e., the distributed nature of the measurements and the processing resources, as well as the limited energy of wireless and often small devices. We first investigate a general stochastic inference problem and design a non-parametric algorithm for tracking a random process using distributed and noisy measurements.
Next, we narrow the problem down to distributed parameter estimation and design distributed quantizers that compress measurement data while maintaining an accurate estimate of the unknown parameter. The contributions of this thesis are as follows. In Chapter 3, we design an algorithm for the distributed inference problem. We first use factor graphs to model the stochastic dependencies among the variables involved in the problem and factorize the global inference problem into a number of local dependencies. A message-passing algorithm called the sum-product algorithm is then run on the factor graph to determine the local computations and data exchanges that the sensing devices must perform to achieve the estimation goal. To tackle the nonlinearities in the problem, we combine particle filtering and Monte Carlo sampling with the sum-product algorithm and develop a distributed non-parametric solution for general nonlinear inference problems. We apply our algorithm to distributed target tracking and show that the algorithm can track the target efficiently even with a small number of particles. In the next three chapters of the thesis, we focus on distributed parameter quantization under energy limitations. In such problems, each sensor device sends a compressed version of its noisy observation of the same parameter to the fusion centre, where the parameter is estimated from the received data. In Chapter 4, we design a set of local quantizers that quantize each sensor's measurement to a few bits. We optimize the quantizers by maximizing the mutual information between the quantized data and the unknown parameter. At the fusion centre, we design an estimator that combines the compressed data from all sensors to estimate the parameter. For very stringent energy constraints, in Chapter 5, we focus on binary quantization, where each sensor quantizes its data to exactly one bit.
We find a set of local binary quantizers that jointly quantize the unknown variable with high precision. At the fusion centre, a maximum-likelihood decoder estimates the parameter from the received bits. In Chapter 6, for an inhomogeneous scenario where measurements have different signal-to-noise ratios, we use the Hungarian algorithm to find the sensor-to-quantizer assignment that minimizes the estimation error.
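The kind of particle-based tracking from noisy measurements described in Chapter 3 can be illustrated with a minimal bootstrap particle filter. This is a generic sketch under assumed scalar random-walk dynamics and Gaussian measurement noise, not the thesis's distributed sum-product algorithm; the function name and all model parameters are illustrative choices.

```python
import math
import random

def particle_filter_track(measurements, n_particles=100,
                          process_std=1.0, meas_std=1.0, seed=0):
    """Bootstrap particle filter for a scalar random-walk state observed
    in additive Gaussian noise (illustrative model, not the thesis's)."""
    rng = random.Random(seed)
    # Diffuse initial prior over the state
    particles = [rng.gauss(0.0, 5.0) for _ in range(n_particles)]
    estimates = []
    for z in measurements:
        # Propagate: random-walk dynamics x_k = x_{k-1} + w_k
        particles = [x + rng.gauss(0.0, process_std) for x in particles]
        # Weight each particle by the measurement likelihood p(z | x)
        weights = [math.exp(-0.5 * ((z - x) / meas_std) ** 2)
                   for x in particles]
        total = sum(weights) or 1.0
        weights = [w / total for w in weights]
        # Point estimate: weighted posterior mean
        estimates.append(sum(w * x for w, x in zip(weights, particles)))
        # Multinomial resampling to avoid weight degeneracy
        particles = rng.choices(particles, weights=weights, k=n_particles)
    return estimates
```

Even this toy version shows the abstract's point that a modest number of particles can track the state, since resampling keeps the particle cloud concentrated around the likely region.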
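The Chapter 4/5 design criterion, choosing quantizers that maximize the mutual information between the quantized data and the unknown parameter, can be sketched for the simplest case of a single one-bit quantizer. The Monte Carlo search below over candidate thresholds is a hedged illustration of the criterion only; the discrete parameter prior, noise model, and function names are assumptions, not the thesis's construction.

```python
import math
import random
from collections import Counter

def empirical_mi(samples):
    """Empirical mutual information (in bits) between the two discrete
    coordinates of (q, t) pairs."""
    n = len(samples)
    pq = Counter(q for q, _ in samples)
    pt = Counter(t for _, t in samples)
    pqt = Counter(samples)
    # I(Q;T) = sum p(q,t) log2( p(q,t) / (p(q) p(t)) )
    return sum((c / n) * math.log2(c * n / (pq[q] * pt[t]))
               for (q, t), c in pqt.items())

def best_threshold(thetas, noise_std, candidates, n_mc=2000, seed=1):
    """Pick the 1-bit quantizer threshold that maximizes the empirical
    mutual information between the quantized noisy measurement and the
    parameter drawn uniformly from `thetas`."""
    rng = random.Random(seed)
    best, best_mi = None, -1.0
    for tau in candidates:
        samples = []
        for _ in range(n_mc):
            t = rng.choice(thetas)             # draw the parameter
            x = t + rng.gauss(0.0, noise_std)  # noisy sensor measurement
            samples.append((1 if x >= tau else 0, t))
        mi = empirical_mi(samples)
        if mi > best_mi:
            best, best_mi = tau, mi
    return best
```

For a parameter taking values -1 or +1 with moderate noise, the search selects a threshold near the midpoint 0, where one bit is most informative; the thesis's problem of jointly designing many such local quantizers is correspondingly harder.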
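The Chapter 6 sensor-to-quantizer assignment is an instance of the classic assignment problem. As a self-contained sketch, the brute-force search below finds the cost-minimizing one-to-one assignment; the Hungarian algorithm used in the thesis solves the same problem in O(n^3) time instead of O(n!). The cost matrix here is hypothetical: entry [s][q] would stand for the estimation-error contribution of giving sensor s quantizer q.

```python
from itertools import permutations

def best_assignment(cost):
    """Exhaustively search one-to-one sensor-to-quantizer assignments,
    returning (assignment, total_cost); assignment[s] is the quantizer
    index given to sensor s. The Hungarian algorithm gives the same
    optimum in polynomial time."""
    n = len(cost)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        c = sum(cost[s][perm[s]] for s in range(n))
        if c < best_cost:
            best_perm, best_cost = list(perm), c
    return best_perm, best_cost

# Hypothetical 3x3 cost matrix (rows: sensors, columns: quantizers)
cost = [[4, 1, 3],
        [2, 0, 5],
        [3, 2, 2]]
print(best_assignment(cost))  # → ([1, 0, 2], 5)
```

For a real implementation at scale, a library routine such as SciPy's `linear_sum_assignment` would replace the factorial search.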

  • Subjects / Keywords
  • Graduation date
  • Type of Item
  • Degree
    Doctor of Philosophy
  • DOI
  • License
    This thesis is made available by the University of Alberta Libraries with permission of the copyright owner solely for non-commercial purposes. This thesis, or any portion thereof, may not otherwise be copied or reproduced without the written consent of the copyright owner, except to the extent permitted by Canadian copyright law.
  • Language
  • Institution
    University of Alberta
  • Degree level
  • Department
    • Department of Electrical and Computer Engineering
  • Specialization
    • Communications
  • Supervisor / co-supervisor and their department(s)
    • Ardakani, Masoud (Electrical and Computer Engineering)
  • Examining committee members and their departments
    • Jing, Yindi (Electrical and Computer Engineering)
    • Tellambura, Chintha (Electrical and Computer Engineering)
    • Musilek, Petr (Electrical and Computer Engineering)
    • Beheshti, Soosan (Electrical and Computer Engineering, Ryerson University)