Motion Analysis based on Significant Points

  • Author / Creator
    Nasim Hajari
  • Motion analysis has extensive applications in surveillance, smart rooms, and many other domains. Different types of sensors can capture the required information: an eye-gaze tracking system can record eye motion, Kinect or Leap Motion sensors can record 3D key-points, and even a conventional digital camera suffices for motion detection and action recognition. The spatial features of motion data change over time. These features can be the coordinates of significant points, such as eye fixation points or skeletal joints, or the locations of user-defined features, such as Histogram of Oriented Gradients (HOG) descriptors, centres of Hough circles, or the centre of mass. Motion data can also be analysed in 1D, 2D, or 3D space, depending on the acquisition device. In this thesis we studied three applications of motion data and key-point trajectory analysis that differ along these dimensions.

    The first application analyses eye motion to better understand team cognition, specifically between surgeons who form a laparoscopic surgery team. Although team cognition is believed to be the foundation of team performance, there is no direct, objective way to measure it, especially in healthcare settings. Previous studies have shown that spatial features, such as the overlap of eye-gaze data, can serve as a measure of team cognition. However, due to the dynamic nature of eye-gaze signals, gaze overlap calculated from spatial features alone is not sufficient, as team members might look at the same surgical spot at different times; temporal feature analysis is therefore essential. We studied the eye-gaze signals of two surgeons throughout a simulated laparoscopic surgery task and distinguished expert teams from novice teams by the level of gaze overlap, the lag, and the Recurrence Rate (RR) between the two surgeons, based on dual eye-tracking evidence (a hedged sketch of these measures appears after the abstract). The results support the hypothesis that top-performing teams are better synchronized, show higher eye-gaze overlap and RR, and therefore demonstrate better team cognition.

    The second application is human fall detection from a 2D video sequence. Automatic, real-time fall detection can improve quality of life for seniors and people with special needs, for whom a fall can be life-threatening. Computer-vision-based fall detection systems require less infrastructure and are cheaper and more comfortable for users than smart floors or systems based on wearable devices. However, vision-based systems can be inaccurate or too slow if the features and detection algorithms are not chosen properly, or if the training dataset is too small or not general enough. Acquiring a general training dataset is very challenging, especially for unknown surveillance regions, such as those in smart houses. We proposed a robust, real-time, vision-based fall detection technique that uses only a single RGB camera. The method operates at the frame level and uses only two significant points: the head and the centre of the person (an illustrative two-point rule is sketched after the abstract). Experiments were performed on the publicly available Le2i fall detection dataset; the proposed technique distinguishes falls from everyday actions and works in different indoor environments under varying lighting conditions.
    The last application extracts an animation skeleton directly from a 3D model, regardless of its topology, initial position, and orientation. This can be used to automatically animate any arbitrary 3D character, which has many applications in simulation and entertainment. Defining trajectory key-points for 3D characters without manual intervention remains a challenging problem that makes complete automation difficult. To animate an articulated 3D character, a rigging process is needed, during which an animation skeleton must be extracted from, or embedded into, the 3D model; this tedious process is mostly done manually by expert animators. Most automatic rigging techniques proposed in the literature are neither fully automatic nor pose-invariant, i.e., they require a front-facing model in a neutral T-pose in order to animate successfully. We proposed combining robust skeleton-based feature detection with the identification of various anatomical characteristics to extract the desired key-points, along with the constraint parameters needed for automatic rigging (a simple pose-invariant key-point heuristic is sketched after the abstract).
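
    As a rough, hedged illustration of the dual eye-tracking measures mentioned in the abstract (gaze overlap at a lag, and the cross-recurrence rate), the Python/NumPy sketch below may help. The function names, the radius parameter eps, and the assumption of frame-aligned (x, y) gaze samples are illustrative, not the thesis's exact formulation:

        import numpy as np

        def gaze_overlap_at_lag(gaze_a, gaze_b, lag, eps):
            # Proportion of frames in which the two gaze points fall within
            # radius eps of each other after shifting surgeon B by lag frames.
            if lag > 0:
                a, b = gaze_a[:-lag], gaze_b[lag:]
            elif lag < 0:
                a, b = gaze_a[-lag:], gaze_b[:lag]
            else:
                a, b = gaze_a, gaze_b
            return float(np.mean(np.linalg.norm(a - b, axis=1) < eps))

        def cross_recurrence_rate(gaze_a, gaze_b, eps):
            # Fraction of all (i, j) sample pairs whose gaze points lie
            # within eps of each other (cross-recurrence rate, RR).
            d = np.linalg.norm(gaze_a[:, None, :] - gaze_b[None, :, :], axis=-1)
            return float(np.mean(d < eps))

    Scanning gaze_overlap_at_lag over a range of lags and taking the peak yields both an overlap level and the lag at which the two surgeons are best synchronized.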
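
    In the same spirit, a two-point fall-detection rule can be sketched as follows. It assumes per-frame vertical image coordinates of the head and the person's centre (e.g., from a detector or tracker, with y growing downward), and the thresholds are purely illustrative; the thesis's actual per-frame features and decision rule may differ:

        import numpy as np

        def detect_fall(head_y, centre_y, fps, speed_factor=3.0, gap_ratio=0.3):
            # head_y, centre_y: per-frame vertical pixel coordinates of the
            # head and the body centre (illustrative input format).
            head_y = np.asarray(head_y, float)
            centre_y = np.asarray(centre_y, float)
            body_len = np.abs(centre_y - head_y)   # proxy for body scale, px
            v_head = np.diff(head_y) * fps         # head speed, px/s (down is +)
            # Flag a fall when the head drops faster than speed_factor
            # body-lengths per second and the head-to-centre vertical gap
            # then collapses (the person ends up roughly horizontal).
            fast_drop = v_head.max() > speed_factor * body_len.mean()
            horizontal = body_len[-1] < gap_ratio * body_len.max()
            return bool(fast_drop and horizontal)

    Running such a rule over a sliding window of recent frames keeps the decision at the frame level, as the abstract describes.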
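
    Finally, for the rigging application, one common pose-invariant heuristic for locating candidate limb extremities (head, hands, feet) on an arbitrary mesh is farthest-point sampling in the geodesic metric of the edge graph. The sketch below uses trimesh and SciPy and assumes a single connected mesh; it is a generic heuristic, not the thesis's detector:

        import numpy as np
        import trimesh
        from scipy.sparse import coo_matrix
        from scipy.sparse.csgraph import dijkstra

        def extremity_keypoints(path, n_points=5):
            # Load the model and build a sparse, symmetric vertex graph
            # weighted by edge length.
            mesh = trimesh.load(path, force='mesh')
            e, w = mesh.edges_unique, mesh.edges_unique_length
            n = len(mesh.vertices)
            graph = coo_matrix((np.r_[w, w],
                                (np.r_[e[:, 0], e[:, 1]],
                                 np.r_[e[:, 1], e[:, 0]])), shape=(n, n))
            # Farthest-point sampling: repeatedly take the vertex that is
            # geodesically farthest from all vertices chosen so far.
            seeds = [0]
            dist = dijkstra(graph, indices=seeds[-1])
            for _ in range(n_points):
                seeds.append(int(np.argmax(dist)))
                dist = np.minimum(dist, dijkstra(graph, indices=seeds[-1]))
            return mesh.vertices[seeds[1:]]

    Because geodesic distances along the surface barely change as a character bends its limbs, such extremity candidates are largely independent of the initial pose, which is one reason geodesic features are popular for pose-invariant rigging.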

  • Subjects / Keywords
  • Graduation date
    Spring 2019
  • Type of Item
    Thesis
  • Degree
    Doctor of Philosophy
  • DOI
    https://doi.org/10.7939/r3-ye5n-kh63
  • License
    Permission is hereby granted to the University of Alberta Libraries to reproduce single copies of this thesis and to lend or sell such copies for private, scholarly or scientific research purposes only. Where the thesis is converted to, or otherwise made available in digital form, the University of Alberta will advise potential users of the thesis of these terms. The author reserves all other publication and other rights in association with the copyright in the thesis and, except as herein before provided, neither the thesis nor any substantial portion thereof may be printed or otherwise reproduced in any material form whatsoever without the author's prior written permission.