Angle-based homing from a reference image set using the 1D trifocal tensor


Autonomous Robots, Volume 34 (2) – Jan 11, 2013

Publisher
Springer Journals
Copyright
Copyright © 2013 by Springer Science+Business Media New York
Subject
Engineering; Robotics and Automation; Control, Robotics, Mechatronics; Artificial Intelligence (incl. Robotics); Computer Imaging, Vision, Pattern Recognition and Graphics
ISSN
0929-5593
eISSN
1573-7527
DOI
10.1007/s10514-012-9313-0

Abstract

This paper presents a visual homing method for a robot moving on the ground plane. The approach employs a set of omnidirectional images acquired previously at different locations (including the goal position) in the environment, together with the current image taken by the robot. As a first contribution, we present a method to obtain the relative angles between all these locations, based on the computation of the 1D trifocal tensor between views and an indirect angle estimation procedure. The tensor is particularly well suited for planar motion and provides important robustness properties to our technique. A second contribution is a new control law that uses the available angles, with no range information involved, to drive the robot to the goal. Our method thereby exploits the strengths of omnidirectional vision, which provides a wide field of view and very precise angular information. We present a formal proof of the stability of the proposed control law. The performance of our approach is illustrated through simulations and several sets of experiments with real images.
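The 1D trifocal tensor mentioned in the abstract is a 2×2×2 tensor relating bearing-only (planar) projections of the same landmark in three views through a trilinear constraint, and it can be estimated linearly from at least seven matched bearings. The sketch below, which is not the authors' implementation, illustrates this linear estimation step under simplifying assumptions (zero-rotation 1D cameras, noise-free bearings); the helper names `bearing` and `estimate_1d_trifocal_tensor` are hypothetical:

```python
import numpy as np

def bearing(theta):
    # Homogeneous 1D image point for a landmark seen at bearing theta (radians).
    return np.array([np.cos(theta), np.sin(theta)])

def estimate_1d_trifocal_tensor(th1, th2, th3):
    """Linearly estimate the 2x2x2 1D trifocal tensor T (up to scale) from
    matched bearings in three views, using the trilinear constraint
    sum_{i,j,k} T[i,j,k] * u[i] * v[j] * w[k] = 0 (at least 7 matches)."""
    A = []
    for t1, t2, t3 in zip(th1, th2, th3):
        u, v, w = bearing(t1), bearing(t2), bearing(t3)
        # Each correspondence contributes one linear equation in the
        # 8 tensor entries.
        A.append(np.einsum('i,j,k->ijk', u, v, w).ravel())
    A = np.asarray(A)
    # The tensor is the null vector of A, recovered via SVD.
    _, _, Vt = np.linalg.svd(A)
    T = Vt[-1].reshape(2, 2, 2)
    return T / np.linalg.norm(T)
```

In practice the estimation would be embedded in a robust scheme (e.g. RANSAC over the feature matches), and the relative angles between locations would then be extracted from the recovered tensor; those steps are omitted here.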

Journal

Autonomous Robots, Springer Journals

Published: Jan 11, 2013
