
Visual-model-based, real-time 3D pose tracking for autonomous navigation: methodology and experiments

Autonomous Robots, Volume 25 (3) – Jul 9, 2008

References (63)

Publisher
Springer Journals
Copyright
Copyright © 2008 by Springer Science+Business Media, LLC
Subject
Engineering; Robotics and Automation; Artificial Intelligence (incl. Robotics); Computer Imaging, Vision, Pattern Recognition and Graphics; Control, Robotics, Mechatronics
ISSN
0929-5593
eISSN
1573-7527
DOI
10.1007/s10514-008-9094-7

Abstract

This paper presents a novel 3D-model-based computer-vision method for tracking the full six degree-of-freedom (dof) pose (position and orientation) of a rigid body in real time. The methodology is targeted at autonomous navigation tasks, such as interception of or rendezvous with mobile targets. Tracking an object’s complete six-dof pose makes the proposed algorithm useful even when targets are not restricted to planar motion (e.g., flying or rough-terrain navigation). Tracking is achieved via a combination of textured model projection and optical flow. The main contribution of our work is the novel combination of optical flow with the z-buffer depth information that is produced during model projection. This allows us to achieve six-dof tracking with a single camera.
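
The abstract describes the core coupling: optical flow is computed between a rendering of the textured model (at the previous pose estimate) and the live camera frame, and the renderer's z-buffer supplies per-pixel depth, so each 2D flow correspondence becomes a 3D-to-2D constraint on the rigid-body motion. The sketch below is an illustrative reconstruction of that idea under stated assumptions, not the paper's exact formulation: the inputs (rendered_gray, camera_gray, zbuffer, K, rvec_prev, tvec_prev) and the helper update_pose are hypothetical names, and OpenCV's PnP solver stands in for whatever motion estimator the authors actually use.

```python
# Illustrative sketch only: optical flow + z-buffer depth for 6-dof tracking
# with a single camera. Hypothetical inputs: rendered_gray / zbuffer come from
# rendering the textured model at the previous pose; K is the 3x3 intrinsics;
# rvec_prev, tvec_prev are (3,1) float64 model-to-camera pose parameters.
import cv2
import numpy as np

def update_pose(rendered_gray, camera_gray, zbuffer, K, rvec_prev, tvec_prev):
    """One tracking step: rendered model view + z-buffer -> updated 6-dof pose."""
    # 1. Select trackable corners on the textured model rendering.
    pts = cv2.goodFeaturesToTrack(rendered_gray, maxCorners=300,
                                  qualityLevel=0.01, minDistance=7)
    if pts is None:
        return rvec_prev, tvec_prev

    # 2. Optical flow from the rendering to the live camera frame gives
    #    2D correspondences for those points.
    pts2, status, _ = cv2.calcOpticalFlowPyrLK(rendered_gray, camera_gray, pts, None)
    good = status.ravel() == 1
    pts, pts2 = pts[good].reshape(-1, 2), pts2[good].reshape(-1, 2)

    # 3. Back-project the rendered pixels into 3D (previous camera frame) using
    #    the z-buffer depth produced during model projection.
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    obj3d, img2d = [], []
    for (u, v), uv2 in zip(pts, pts2):
        z = zbuffer[int(round(v)), int(round(u))]
        if np.isfinite(z) and z > 0:            # skip background pixels
            obj3d.append([(u - cx) * z / fx, (v - cy) * z / fy, z])
            img2d.append(uv2)
    if len(obj3d) < 6:
        return rvec_prev, tvec_prev

    # 4. Estimate the incremental rigid motion from the 3D-2D matches and
    #    compose it with the previous pose estimate.
    ok, rvec_inc, tvec_inc = cv2.solvePnP(np.float32(obj3d), np.float32(img2d),
                                          K, None, flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        return rvec_prev, tvec_prev
    rvec_new, tvec_new = cv2.composeRT(rvec_prev, tvec_prev, rvec_inc, tvec_inc)[:2]
    return rvec_new, tvec_new
```

In a full tracker the updated pose would seed the next model rendering, closing the projection-and-flow loop the abstract describes.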

Journal

Autonomous Robots, Springer Journals

Published: Jul 9, 2008
