
This file is in the following communities:

Faculty of Graduate Studies and Research


This file is in the following collections:

Theses and Dissertations

Visual Task Specification User Interface for Uncalibrated Visual Servoing (Open Access)


Subject / Keywords
User Interface
Visual Task Specification
Human-Robot Interaction
Uncalibrated Visual Servoing
Type of item
Thesis
Degree grantor
University of Alberta
Author or creator
Gridseth, Mona
Supervisor and department
Jagersand, Martin (Computing Science)
Examining committee member and department
Dodds, Zachary (Computer Science, Harvey Mudd College)
Boulanger, Pierre (Computing Science)
Department
Department of Computing Science

Date accepted
Graduation date
Degree
Master of Science
Degree level
Master's
Abstract
Today's robots work well in structured environments, where they complete tasks autonomously and accurately, as industrial robotics demonstrates. However, in unstructured and dynamic environments such as homes, hospitals, or disaster areas, robots are still of little assistance. Moreover, robotics research has focused on topics such as mechatronic design, control, and autonomy, while fewer works pay attention to human-robot interfacing. The result is a growing gap between expectations of robotics technology and its real-world capabilities. In this work we present a human-robot interface for semi-autonomous, human-in-the-loop control that aims to tackle some of the challenges robots face in unstructured environments. The interface lets a user specify tasks for a robot to complete using uncalibrated visual servoing. Visual servoing is a technique that uses visual input to control a robot; uncalibrated visual servoing in particular suits unstructured environments because it relies on neither calibration nor other modelling. The user visually specifies high-level tasks by combining a set of geometric constraints, and our interface offers a versatile set of tasks that span both coarse and fine manipulation. The contribution of this thesis is twofold. First, we develop an interface for visual task specification. Second, we conduct experiments both to explore the visual task specification technique, determining how best to use it in practice, and to assess the performance of the system.
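As an illustrative aside (not code from the thesis): uncalibrated visual servoing is commonly realized by estimating the image Jacobian online with a secant (Broyden) update rather than deriving it from camera or robot calibration. A minimal sketch, where the hypothetical `observe` function stands in for the camera and feature tracker:

```python
import numpy as np

def broyden_uvs(observe, q0, target, steps=50, lam=0.5):
    """Minimal uncalibrated image-based visual servoing loop (sketch).

    observe(q) returns the image-feature vector seen at joint
    configuration q. No camera or kinematic model is assumed: the
    image Jacobian J is bootstrapped from small probe motions and
    then kept current with a Broyden rank-1 secant update.
    """
    q = q0.astype(float).copy()
    f = observe(q)
    n = q.size
    # Bootstrap J with finite-difference probes along each joint.
    eps = 1e-3
    J = np.column_stack([(observe(q + eps * np.eye(n)[i]) - f) / eps
                         for i in range(n)])
    for _ in range(steps):
        e = f - target                      # visual error in image space
        dq = -lam * np.linalg.pinv(J) @ e   # damped pseudo-inverse control
        f_new = observe(q + dq)
        # Broyden secant update: correct J along the direction just moved.
        denom = dq @ dq
        if denom > 1e-12:
            J += np.outer((f_new - f) - J @ dq, dq) / denom
        q, f = q + dq, f_new
        if np.linalg.norm(f - target) < 1e-9:
            break
    return q, f
```

Because the Jacobian is refined from the robot's own motion, the loop adapts to unmodelled optics and kinematics, which is what makes the approach attractive in unstructured environments.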
Rights
Permission is hereby granted to the University of Alberta Libraries to reproduce single copies of this thesis and to lend or sell such copies for private, scholarly or scientific research purposes only. The author reserves all other publication and other rights in association with the copyright in the thesis and, except as herein before provided, neither the thesis nor any substantial portion thereof may be printed or otherwise reproduced in any material form whatsoever without the author's prior written permission.
Citation for previous publication

File Details

Date Uploaded
Date Modified
File format: pdf (PDF/A)
Mime type: application/pdf
File size: 22227384 bytes (≈22.2 MB)
Last modified: 2016-06-24 17:44:02-06:00
Filename: Gridseth_Mona_201509_MSc.pdf
Original checksum: cdfbe1ee86fcfc3057bace5bcb28adbb