Google’s Project Tango has made integrated depth sensing and onboard visual-inertial odometry available to mobile devices such as phones and tablets. In this work, we explore the problem of large-scale, real-time 3D reconstruction on a mobile device of this type. Solving this problem is a necessary prerequisite for many indoor applications, including navigation, augmented reality, and building scanning. The main challenges include dealing with noisy and low-frequency depth data and managing limited computational and memory resources. State-of-the-art approaches to large-scale dense reconstruction require large amounts of memory and high-performance GPU computing. Other existing 3D reconstruction approaches on mobile devices either build only a sparse reconstruction, offload their computation to other devices, or require long post-processing to extract the geometric mesh. In contrast, we can reconstruct and render a global mesh on the fly, using only the mobile device’s CPU, in very large (300 m²) scenes, at a resolution of 2–3 cm. To achieve this, we divide the scene into spatial volumes indexed by a hash map. Each volume contains the truncated signed distance function for that area of space, as well as the mesh segment derived from the distance function. This approach allows us to focus computational and memory resources only on areas of the scene that are currently observed, and to leverage parallelization techniques for multi-core processing. Furthermore, we describe an on-device post-processing method for fusing datasets from multiple, independent trials in order to improve the quality and coverage of the reconstruction. We discuss how the particularities of the devices impact our algorithm and implementation decisions. Finally, we provide both qualitative and quantitative results on publicly available RGB-D datasets, and on datasets collected in real time from two devices.
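The core data structure described in the abstract — spatial volumes allocated lazily and indexed by a hash map, each holding a truncated signed distance function and a mesh segment — can be sketched as follows. This is not the authors' implementation: the chunk size, the truncation band, and the simple weighted-average fusion rule are illustrative assumptions; only the overall chunked-hash-map layout comes from the abstract.

```python
# Sketch of a spatially hashed TSDF map (illustrative assumptions noted inline).
from dataclasses import dataclass, field
import numpy as np

CHUNK_SIZE = 16    # voxels per chunk side (assumed)
VOXEL_RES = 0.03   # 3 cm voxels, matching the 2-3 cm resolution in the abstract
TRUNCATION = 0.09  # truncation band for the signed distance (assumed)

@dataclass
class Chunk:
    """One spatial volume: a truncated signed distance field plus fusion weights."""
    tsdf: np.ndarray = field(
        default_factory=lambda: np.full((CHUNK_SIZE,) * 3, TRUNCATION, dtype=np.float32))
    weight: np.ndarray = field(
        default_factory=lambda: np.zeros((CHUNK_SIZE,) * 3, dtype=np.float32))
    mesh_dirty: bool = True  # mesh segment must be re-extracted after updates

class ChunkedTSDF:
    """Global map: chunks are created lazily and indexed by a hash map,
    so memory is spent only on regions of space that were actually observed."""
    def __init__(self):
        self.chunks = {}  # (i, j, k) integer chunk coordinates -> Chunk

    def chunk_coords(self, p):
        """World-space point -> (chunk index, voxel index within that chunk)."""
        v = np.floor(np.asarray(p) / VOXEL_RES).astype(int)
        return tuple(v // CHUNK_SIZE), tuple(v % CHUNK_SIZE)

    def fuse(self, p, sdf, w=1.0):
        """Fold one signed-distance observation at point p into the map using
        a running weighted average (a common TSDF fusion rule, assumed here)."""
        cid, vid = self.chunk_coords(p)
        chunk = self.chunks.setdefault(cid, Chunk())  # lazy allocation
        d = np.clip(sdf, -TRUNCATION, TRUNCATION)
        w_old = chunk.weight[vid]
        chunk.tsdf[vid] = (chunk.tsdf[vid] * w_old + d * w) / (w_old + w)
        chunk.weight[vid] = w_old + w
        chunk.mesh_dirty = True  # this volume's mesh segment needs rebuilding
```

Because only touched chunks exist in the hash map, both depth fusion and mesh extraction can be restricted to the currently observed chunks, and independent chunks can be updated in parallel across CPU cores, in the spirit of the approach the abstract describes.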
Autonomous Robots – Springer Journals
Published: Feb 24, 2017