Appearance SLAM in Changing Illumination Environment

  • Author / Creator
    Liu, Yang
  • Abstract
    With the rapid development of visual sensors such as monocular cameras, appearance-based robot simultaneous localization and mapping (SLAM) has become an open research topic in robotics. In appearance SLAM, a robot uses the visual appearance of locations (i.e., the images) acquired along its route to build a map of the environment and localizes itself by recognizing the places it has visited before. In this thesis, we address several issues in current appearance SLAM techniques, with the aim of developing a systematic approach to SLAM under significant illumination change, a typical scenario in long-term mapping.

    Instead of the traditional Bag-of-Words (BoW) image descriptor for comparing the appearance of locations, we match visual features directly, which mitigates the perceptual aliasing that arises under illumination change and is caused in part by the vector quantization of feature descriptors during image encoding. Efficient data structures such as k-d trees and randomized k-d forests are exploited to speed up feature matching with approximate nearest neighbor search, ensuring real-time robot exploration without sacrificing location-matching performance.

    To deal with cases in which local features do not work well, for example in environments with significant illumination variation where feature repeatability is not guaranteed, we propose a whole-image descriptor: a low-dimensional, compact representation of the image's responses to a bank of filters that incorporates the structural information (e.g., the edges) of the image, used to describe appearance and measure similarity among locations. PCA is employed to transform the high-dimensional gist descriptor into a lower-dimensional form, improving both the computational efficiency and the discriminative power of the descriptor. In addition, we use a particle filter to exploit the correlation among the images in the sequence captured by the robot when identifying loop-closure candidates, which makes the algorithm highly scalable owing to both the compactness of the image descriptor and the simplicity of particle filtering.

    Building on the above methods, the final component of our SLAM system is a novel feature matching method for multi-view geometry (MVG) based verification of loop closures under illumination change. To develop such a method, which serves as the prerequisite for verification, we exploit the particular camera motion in our application and show that a spatial constraint on matching features (or keypoints), derived from optical flow statistics, can serve as an important basis for finding true matches. In particular, by assuming a weak perspective camera model and planar camera motion, we derive a simple constraint on correctly matched keypoints in terms of the flow vectors between the two images. We then use this constraint to prune the putative matches and boost the inlier ratio significantly, thereby giving the subsequent verification algorithm a chance to succeed.

    (Illustrative code sketches of these components follow the record below.)

  • Subjects / Keywords
  • Graduation date
    Spring 2016
  • Type of Item
    Thesis
  • Degree
    Doctor of Philosophy
  • DOI
    https://doi.org/10.7939/R3BZ61H1H
  • License
    This thesis is made available by the University of Alberta Libraries with permission of the copyright owner solely for non-commercial purposes. This thesis, or any portion thereof, may not otherwise be copied or reproduced without the written consent of the copyright owner, except to the extent permitted by Canadian copyright law.
  • Language
    English
  • Institution
    University of Alberta
  • Degree level
    Doctoral
  • Department
  • Supervisor / co-supervisor and their department(s)
  • Examining committee members and their departments
    • Zhang, Hong (Computing Science)
    • Jagersand, Martin (Computing Science)
    • Eustice, Ryan (Electrical Engineering and Computer Science)
    • Mueller, Martin (Computing Science)
    • Ray, Nilanjan (Computing Science)
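
The abstract's first component, direct feature matching accelerated by randomized k-d forests with approximate nearest neighbor search, could look roughly like the sketch below. This is a minimal illustration only, assuming OpenCV's SIFT detector and FLANN matcher; the function name match_locations, the ratio-test threshold, and the index parameters are illustrative choices, not taken from the thesis.

```python
# Minimal sketch (assumed API: OpenCV SIFT + FLANN): match two location
# images by their local features directly, instead of quantizing the
# descriptors into a BoW vocabulary.
import cv2

def match_locations(img_a, img_b, ratio=0.7, trees=4):
    """Return keypoints and putative matches between two grayscale images."""
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)

    # Randomized k-d forest index (FLANN_INDEX_KDTREE = 1) for approximate
    # nearest-neighbour search over the raw descriptors.
    flann = cv2.FlannBasedMatcher(dict(algorithm=1, trees=trees),
                                  dict(checks=50))

    # Lowe's ratio test discards ambiguous nearest neighbours.
    knn = flann.knnMatch(des_a, des_b, k=2)
    good = [m for m, n in knn if m.distance < ratio * n.distance]
    return kp_a, kp_b, good
```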
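For the whole-image descriptor, the abstract describes PCA compression of a high-dimensional, gist-style filter-bank response into a compact form used to measure similarity among locations. A minimal NumPy sketch of that reduction step follows; the helper names, the choice of 64 components, and the Euclidean distance measure are assumptions for illustration.

```python
# Minimal sketch (NumPy only): reduce a high-dimensional gist-style
# filter-bank descriptor with PCA and compare locations in the reduced space.
import numpy as np

def fit_pca(gist_matrix, n_components=64):
    """gist_matrix: (n_images, d) array of raw whole-image descriptors."""
    mean = gist_matrix.mean(axis=0)
    # Principal axes come from the SVD of the centred data matrix.
    _, _, vt = np.linalg.svd(gist_matrix - mean, full_matrices=False)
    return mean, vt[:n_components]            # mean and axes of shape (k, d)

def project_gist(descriptor, mean, axes):
    """Project one raw descriptor onto the learned low-dimensional basis."""
    return axes @ (descriptor - mean)

def location_distance(z_a, z_b):
    """Smaller distance in the reduced space = more similar appearance."""
    return float(np.linalg.norm(z_a - z_b))
```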
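The abstract also mentions a particle filter that exploits the temporal correlation of the image sequence when proposing loop-closure candidates. The sketch below shows one generic way such a filter over map location indices might be structured; the motion model, appearance likelihood, and resampling threshold are placeholders, not the models used in the thesis.

```python
# Minimal sketch (NumPy only) of a particle filter over map location indices.
import numpy as np

rng = np.random.default_rng(0)

def predict(particles, n_locations, p_jump=0.05):
    """Particles drift along the previously mapped route, with rare jumps."""
    moved = particles + rng.integers(0, 3, size=particles.size)  # 0..2 steps
    jumps = rng.random(particles.size) < p_jump                  # relocalize
    moved[jumps] = rng.integers(0, n_locations, size=int(jumps.sum()))
    return np.clip(moved, 0, n_locations - 1)

def update(particles, weights, distances, sigma=0.15):
    """Reweight particles by the appearance distance of their locations."""
    w = weights * np.exp(-distances[particles] / sigma)
    w = w / w.sum()
    # Resample (multinomial) when the effective sample size collapses.
    if 1.0 / np.sum(w ** 2) < 0.5 * particles.size:
        idx = rng.choice(particles.size, size=particles.size, p=w)
        particles, w = particles[idx], np.full(particles.size, 1.0 / w.size)
    return particles, w
```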
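Finally, the abstract derives a spatial constraint on the flow vectors of correctly matched keypoints under a weak perspective camera and planar motion, and uses it to prune putative matches before MVG verification. The exact form of that constraint is not given in the abstract, so the sketch below uses a plausible surrogate, keeping matches whose flow vector stays near the robust (median) flow, purely to show where such pruning would sit in the pipeline.

```python
# Hypothetical surrogate for the thesis's flow-vector constraint: keep only
# matches whose flow is consistent with the dominant flow of all putative
# matches (deviation measured against the median absolute deviation).
import numpy as np

def prune_by_flow(pts_a, pts_b, mad_scale=3.0):
    """pts_a, pts_b: (n, 2) arrays of matched keypoint coordinates."""
    flow = pts_b - pts_a                          # per-match flow vectors
    dominant = np.median(flow, axis=0)            # dominant image motion
    dev = np.linalg.norm(flow - dominant, axis=1)
    mad = np.median(np.abs(dev - np.median(dev))) + 1e-9
    # Boolean mask over matches: True = keep as a likely inlier.
    return dev < np.median(dev) + mad_scale * mad
```

The surviving matches would then feed a standard geometric verification step, for example a RANSAC-based epipolar check, which is where the boosted inlier ratio described in the abstract matters.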