
A Model for the Omnidirectional Acquisition and Rendering of Stereoscopic Images for Human Viewing


3D Research, Volume 6 (4) – Oct 1, 2015


References (46)

Publisher
Springer Journals
Copyright
Copyright © 2015 by 3D Research Center, Kwangwoon University and Springer-Verlag Berlin Heidelberg
Subject
Engineering; Signal, Image and Speech Processing; Computer Imaging, Vision, Pattern Recognition and Graphics; Optics, Optoelectronics, Plasmonics and Optical Devices
eISSN
2092-6731
DOI
10.1007/s13319-015-0069-0

Abstract

Interactive visual media enable the visualization and navigation of remote real-world locations in all gaze directions. A large segment of such media is created from pictures of the remote sites, thanks to advances in panoramic cameras. A desirable enhancement is to enable the stereoscopic visualization of remote scenes in all gaze directions. In this context, a model of the signal to be acquired by an omnistereoscopic sensor is needed in order to design better acquisition strategies. This omnistereoscopic viewing model must account for the geometric constraints imposed by the human binocular vision system, since the goal is to produce stereoscopic imagery capable of inducing stereopsis consistently in any gaze direction; in this paper, we present such a model. In addition, we discuss different approaches to sampling or approximating this function, and we propose a general acquisition model for sampling the omnistereoscopic light signal. From this model, we propose that acquiring and mosaicking sparse sets of partially overlapping stereoscopic snapshots can evoke a satisfactory illusion of depth. Finally, we show an example of the rendering pipeline used to create the omnistereoscopic imagery.
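
A minimal sketch (Python/NumPy) of the mosaicking idea summarized in the abstract is given below. It is not the authors' pipeline: it assumes each stereoscopic snapshot has already been re-projected onto an equirectangular strip with the same height as the final panorama, and the names Snapshot, blend_strip and mosaic are illustrative only. Overlapping strips, indexed by gaze azimuth, are cross-faded with triangular (feathering) weights into separate left-eye and right-eye panoramas.

# Illustrative sketch only; assumes pre-projected strips of panorama height.
import numpy as np
from dataclasses import dataclass

@dataclass
class Snapshot:
    left: np.ndarray      # left-eye strip, pano_height x W x 3, pre-projected
    right: np.ndarray     # right-eye strip, pano_height x W x 3, pre-projected
    azimuth_deg: float    # gaze direction at the center of the strip
    fov_deg: float        # horizontal field of view covered by the strip

def blend_strip(acc, wgt, strip, azimuth_deg, fov_deg, pano_width):
    # Map each strip column to a panorama column, wrapping around 360 degrees.
    h, w, _ = strip.shape
    angles = np.linspace(azimuth_deg - fov_deg / 2.0,
                         azimuth_deg + fov_deg / 2.0, w) % 360.0
    cols = (angles / 360.0 * pano_width).astype(int) % pano_width
    # Triangular (feathering) weights: strongest at the strip center, so
    # overlapping snapshots cross-fade instead of leaving visible seams.
    win = 1.0 - np.abs(np.linspace(-1.0, 1.0, w))
    np.add.at(acc, (slice(None), cols), strip * win[None, :, None])
    np.add.at(wgt, (slice(None), cols), np.tile(win, (h, 1)))

def mosaic(snapshots, pano_width=2048, pano_height=512):
    # Build one left-eye and one right-eye panorama from overlapping pairs.
    panoramas = {}
    for eye in ("left", "right"):
        acc = np.zeros((pano_height, pano_width, 3))
        wgt = np.zeros((pano_height, pano_width))
        for s in snapshots:
            blend_strip(acc, wgt, getattr(s, eye).astype(float),
                        s.azimuth_deg, s.fov_deg, pano_width)
        panoramas[eye] = acc / np.maximum(wgt[..., None], 1e-6)
    return panoramas["left"], panoramas["right"]

Feathered accumulation is only a stand-in for the rendering stage discussed in the paper; the point it illustrates is that a sparse set of partially overlapping stereoscopic snapshots, indexed by gaze azimuth, can cover all horizontal gaze directions with a consistent left/right image pair.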

Journal

3D Research, Springer Journals

Published: Oct 1, 2015
