




This file is in the following communities:

Faculty of Graduate Studies and Research


This file is in the following collections:

Theses and Dissertations

Appearance SLAM in Changing Illumination Environment (Open Access)


Subject/Keyword
Place Recognition
Robot Vision
Visual Navigation
Type of item
Thesis
Degree grantor
University of Alberta
Author or creator
Liu, Yang
Supervisor and department
Zhang, Hong (Computing Science)
Examining committee member and department
Mueller, Martin (Computing Science)
Ray, Nilanjan (Computing Science)
Zhang, Hong (Computing Science)
Eustice, Ryan (Electrical Engineering and Computer Science)
Jagersand, Martin (Computing Science)
Department of Computing Science

Degree level
Doctor of Philosophy
Abstract
With the rapid development of visual sensors such as monocular cameras, appearance-based simultaneous localization and mapping (SLAM) has become an active research topic in robotics. In appearance SLAM, a robot uses the visual appearance of locations (i.e., the images) acquired along its route to build a map of the environment, and localizes itself by recognizing places it has visited before. This thesis addresses several issues in current appearance SLAM techniques, with the goal of developing a systematic approach to SLAM under significant illumination change, a typical scenario in long-term mapping.

Instead of comparing the appearance of locations with the traditional Bag-of-Words (BoW) image descriptor, we match visual features directly, reducing the perceptual aliasing that arises under illumination change and is caused in part by the vector quantization of feature descriptors during image encoding. Efficient data structures such as the k-d tree and randomized k-d forests speed up feature matching through approximate nearest-neighbour search, ensuring real-time robot exploration without sacrificing location-matching performance.

For cases in which local features work poorly, for example in environments with significant illumination variation where feature repeatability is not guaranteed, we propose a whole-image descriptor: a compact, low-dimensional representation of an image's responses to a bank of filters that incorporates the image's structural information (e.g., its edges). This descriptor is used to describe appearance and to measure similarity among locations. PCA transforms the high-dimensional gist descriptor into a lower-dimensional form, improving both the computational efficiency and the discriminating power of the descriptor.
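The two building blocks named in the abstract, approximate nearest-neighbour feature matching with a k-d tree and PCA compression of a whole-image (gist-like) descriptor, can be sketched as follows. This is a minimal illustration on synthetic data: the descriptor sizes, the variable names, and the use of SciPy's `cKDTree` are assumptions standing in for the thesis's actual pipeline.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)

# --- Approximate nearest-neighbour feature matching with a k-d tree ---
# Hypothetical 128-D local feature descriptors (SIFT-like): a database of
# features from previously visited locations, and queries from the current
# image that are noisy copies of the first 50 database entries.
db = rng.normal(size=(1000, 128)).astype(np.float32)
queries = db[:50] + 0.01 * rng.normal(size=(50, 128)).astype(np.float32)

# Build the index once; each lookup then avoids a linear scan over all
# database features.
tree = cKDTree(db)
dists, idx = tree.query(queries, k=1)
matches = int((idx == np.arange(50)).sum())  # noisy copies find their source

# --- PCA compression of a whole-image (gist-like) descriptor ---
# 200 images, each described by a hypothetical 512-D filter-response vector.
G = rng.normal(size=(200, 512))
Gc = G - G.mean(axis=0)                  # centre the data
U, S, Vt = np.linalg.svd(Gc, full_matrices=False)
Z = Gc @ Vt[:32].T                       # project onto top 32 components
```

Indexing the map once and querying per feature is what makes the per-image matching cost sublinear in map size, which is the property the abstract ties to real-time exploration.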
In addition, we use a particle filter to exploit the correlation among images in a sequence captured by the robot when identifying loop-closure candidates, making the algorithm highly scalable thanks to both the compactness of the image descriptor and the simplicity of particle filtering. The final component of our SLAM system is a novel feature-matching method for multi-view geometry (MVG) based verification of loop closures under illumination change. To develop this method, which serves as a prerequisite for verification, we exploit the particular camera motion in our application and show that a spatial constraint on matching features (keypoints), derived from optical-flow statistics, can serve as an important basis for finding true matches. Specifically, by assuming a weak-perspective camera model and planar camera motion, we derive a simple constraint on correctly matched keypoints in terms of the flow vectors between two images. We then use this constraint to prune putative matches, boosting the inlier ratio significantly and thereby giving the subsequent verification algorithm a chance to succeed.
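The flow-vector pruning idea can be illustrated with a simplified consensus check: compute the flow vector of every putative match and keep only those that agree with the dominant flow. This is an illustrative sketch, not the thesis's exact derivation; the function name, the tolerance, and the synthetic correspondences are assumptions, and the real constraint is derived from the weak-perspective, planar-motion model rather than a single shared translation.

```python
import numpy as np

def prune_by_flow_consensus(pts1, pts2, tol=5.0):
    """Keep putative matches whose flow vector (pts2 - pts1) agrees with a
    robust estimate of the dominant flow (illustrative simplification)."""
    flow = pts2 - pts1
    dominant = np.median(flow, axis=0)            # robust to outliers
    return np.linalg.norm(flow - dominant, axis=1) < tol

rng = np.random.default_rng(1)
inliers1 = rng.uniform(0, 640, size=(80, 2))
inliers2 = inliers1 + np.array([12.0, -3.0])      # consistent flow vector
outliers1 = rng.uniform(0, 640, size=(20, 2))
outliers2 = rng.uniform(0, 640, size=(20, 2))     # random, inconsistent flow

pts1 = np.vstack([inliers1, outliers1])
pts2 = np.vstack([inliers2, outliers2])
keep = prune_by_flow_consensus(pts1, pts2)
```

Because most outliers violate the consensus, the surviving set has a much higher inlier ratio, which is exactly the precondition the abstract states for the subsequent MVG verification to succeed.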
This thesis is made available by the University of Alberta Libraries with permission of the copyright owner solely for the purpose of private, scholarly or scientific research. This thesis, or any portion thereof, may not otherwise be copied or reproduced without the written consent of the copyright owner, except to the extent permitted by Canadian copyright law.
Citation for previous publication
Yang Liu and Hong Zhang, "Indexing Visual Features: Real-Time Loop Closure Detection Using a Tree Structure," ICRA 2012.
Yang Liu and Hong Zhang, "Visual Loop Closure Detection with a Compact Image Descriptor," IROS 2012.
Yang Liu and Hong Zhang, "Towards Improving the Efficiency of Sequence-Based SLAM," ICMA 2013.
Yang Liu and Hong Zhang, "Performance Evaluation of Whole-Image Descriptors in Visual Loop Closure Detection," ICIA 2013.
Yang Liu, Rong Feng and Hong Zhang, "Keypoint Matching by Outlier Pruning with Consensus Constraint," ICRA 2015.

File Details

File format: pdf (Portable Document Format)
Mime type: application/pdf
File size: 12115126 bytes
Last modified: 2016-06-16 16:59:17-06:00
Filename: Liu_Yang_201604_PhD.pdf
Original checksum: 0336bf7969c0d12bf0879947c0937cd7
Well formed: true
Valid: true
File title: Appearance SLAM in Changing Illumination Environment
File author: Yang Liu
Page count: 146