Visual State Estimation for Autonomous Navigation

  • Author / Creator
    Salimzadeh, Ali
  • This thesis aims to improve robot perception for autonomous navigation in highly dynamic environments. In the first part of this research, a fixed-frame visual localization method using a fisheye monocular camera is proposed to enhance navigation accuracy for autonomous mobile robots in dynamic indoor environments, with direct application to warehouse and service robotics. The method develops an optimal variance filter with covariance adaptation for visual state estimation and addresses challenging scenarios involving full or partial occlusion. In the second part of this thesis, a novel and computationally efficient visual-inertial framework for dynamic object detection and tracking is proposed, using onboard visual and inertial sensors for autonomous navigation in highly dynamic environments. It addresses the limitations of existing localization and navigation methods, which either rely heavily on the assumption of static features in the scene or depend on learning-based methods to detect dynamic objects. The framework combines prediction over inertial data with measurements from stereo vision-based state estimation to form a stochastic filter with Bayesian tracking for motion classification. Within this pipeline, disparity map generation, point cloud clustering, and consistent tracking are carried out for both fixed- and moving-frame scenarios.
    The proposed frameworks are experimentally validated in several autonomous navigation scenarios in highly dynamic indoor and outdoor environments, including urban settings. The two solutions presented in this thesis are designed to resolve challenges imposed by the dynamic nature of the environment, including occlusions and unreliable feature selection for localization. The results confirm the reliable, consistent, and real-time performance of the developed frameworks in both fixed-frame (i.e., infrastructure-based) and moving-frame state estimation using multimodal visual-inertial data. Combining the two solutions for networked robotic systems and connected autonomous driving leads to more precise, robust, and efficient autonomous navigation systems suitable for both indoor and outdoor applications.
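    The abstract's core idea of fusing inertial prediction with visual measurements under an adaptive covariance can be sketched as follows. This is a hypothetical illustration, not the thesis's actual implementation: a one-axis linear Kalman filter whose measurement covariance R is adapted online from the innovation sequence, one common way to realize covariance adaptation in visual state estimation. All names and parameters below are assumptions chosen for the sketch.

```python
import numpy as np

class AdaptiveKalmanFilter:
    """Minimal sketch: inertial prediction + visual position update,
    with innovation-based adaptation of the measurement covariance R."""

    def __init__(self, dt=0.1):
        self.dt = dt
        # State: [position, velocity] along one axis (kept 1-D for brevity).
        self.x = np.zeros(2)
        self.P = np.eye(2)
        self.F = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity model
        self.B = np.array([0.5 * dt**2, dt])        # acceleration input mapping
        self.H = np.array([[1.0, 0.0]])             # camera observes position only
        self.Q = 1e-3 * np.eye(2)                   # process noise
        self.R = np.array([[0.5]])                  # measurement noise (adapted online)
        self.alpha = 0.3                            # forgetting factor for R adaptation

    def predict(self, accel):
        """Propagate the state with an inertial (accelerometer) input."""
        self.x = self.F @ self.x + self.B * accel
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update(self, z):
        """Fuse a visual position measurement; then adapt R from the innovation."""
        y = z - self.H @ self.x                      # innovation
        S = self.H @ self.P @ self.H.T + self.R      # innovation covariance
        K = self.P @ self.H.T @ np.linalg.inv(S)     # Kalman gain
        self.x = self.x + (K @ y).ravel()
        self.P = (np.eye(2) - K @ self.H) @ self.P
        # Innovation-based covariance adaptation: blend the empirical
        # innovation outer product into R, clamped to stay positive.
        self.R = (1 - self.alpha) * self.R + self.alpha * (
            np.outer(y, y) - self.H @ self.P @ self.H.T
        )
        self.R = np.maximum(self.R, 1e-4)

# Usage: track a point moving at 1 m/s from noisy visual position fixes.
kf = AdaptiveKalmanFilter(dt=0.1)
rng = np.random.default_rng(0)
for k in range(100):
    kf.predict(accel=0.0)                            # inertial prediction step
    true_pos = 0.1 * (k + 1)                         # ground truth at 1 m/s
    kf.update(np.array([true_pos + 0.05 * rng.standard_normal()]))
print(kf.x[0])  # final position estimate, close to the true 10.0 m
```

    Innovation-based adaptation is only one standard realization of an adaptive-covariance filter; the thesis's "optimal variance filter" and its stereo visual-inertial extension may differ in structure and in how the covariance is tuned.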

  • Subjects / Keywords
  • Graduation date
    Fall 2023
  • Type of Item
    Thesis
  • Degree
    Master of Science
  • DOI
  • License
    This thesis is made available by the University of Alberta Libraries with permission of the copyright owner solely for non-commercial purposes. This thesis, or any portion thereof, may not otherwise be copied or reproduced without the written consent of the copyright owner, except to the extent permitted by Canadian copyright law.