Y. Xiong, S. Shafer (1993)
Depth from focusing and defocusing. Proceedings of IEEE Conference on Computer Vision and Pattern Recognition
Jae-Hoon Kim, H. Cho (1992)
Real-time determination of a mobile robot's position by linear scanning of a landmark. Robotica, 10
D. Spiliotopoulos, Ion Androutsopoulos, C. Spyropoulos (2001)
Human-robot interaction based on spoken natural language dialogue
J. Horn, G. Schmidt (1995)
Continuous localization of a mobile robot based on 3D-laser-range-data, predicted sensor images, and dead-reckoning. Robotics Auton. Syst., 14
A. Schultz, W. Adams (1998)
Continuous localization using evidence grids. Proceedings 1998 IEEE International Conference on Robotics and Automation (Cat. No.98CH36146), 4
M. Torrance (1994)
Natural communication with robots
S. Ghidary, T. Tani, T. Takamori, M. Hattori (1999)
A new home robot positioning system (HRPS) using IR switched multi ultrasonic sensors. IEEE SMC'99 Conference Proceedings. 1999 IEEE International Conference on Systems, Man, and Cybernetics (Cat. No.99CH37028), 4
J. Borenstein, L. Feng (1994)
UMBmark: a method for measuring, comparing, and correcting dead-reckoning errors in mobile robots
W. Alford, T. Rogers, D. Wilkes, K. Kawamura (1999)
Multi-agent system for a human-friendly robot. IEEE SMC'99 Conference Proceedings. 1999 IEEE International Conference on Systems, Man, and Cybernetics (Cat. No.99CH37028), 2
S. Ghidary, Y. Nakata, T. Takamori, M. Hattori (2000)
Human detection and localization at indoor environment by home robot. SMC 2000 Conference Proceedings. 2000 IEEE International Conference on Systems, Man and Cybernetics. 'Cybernetics Evolving to Systems, Humans, Organizations, and Their Complex Interactions', 2
A. Agah (2000)
Human interactions with intelligent systems: research taxonomy. Comput. Electr. Eng., 27
J. Crowley, H. Christensen (1995)
Vision as Process
K. Koh, J. Kim, H. Cho (1994)
A position estimation system for mobile robots using a monocular image of a 3-D landmark. Robotica, 12
R. Thisted, R. Farebrother (1988)
Linear least squares computations. Technometrics, 33
M. Subbarao, T. Choi, A. Nikzad (1992)
Focusing techniques. Proc. SPIE, 1823
Jie Yang, Weier Lu, A. Waibel (1998)
Skin-Color Modeling and Adaptation
R. Bischoff, Tamhant Jain (1999)
Natural Communication and Interaction with Humanoid Robots
T. Arai, E. Nakano (1983)
Development of Measuring Equipment for Location and Direction (MELODI) Using Ultrasonic Waves. Journal of Dynamic Systems, Measurement, and Control, Transactions of the ASME, 105
H. Beom, H. Cho (1995)
Mobile robot localization using a single rotating sonar and two passive cylindrical beacons. Robotica, 13
T. Matsui, H. Asoh, John Fry, Y. Motomura, F. Asano, Takio Kurita, Isao Hara, N. Otsu (1999)
Integrated Natural Spoken Dialogue System of Jijo-2 Mobile Robot for Office Services
L. Kleeman (1992)
Optimal estimation of position and heading for mobile robots using ultrasonic beacons and dead-reckoning. Proceedings 1992 IEEE International Conference on Robotics and Automation
A. Pentland (1987)
A new sense for depth of field. IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-9
(1996)
A comparative study of three paradigms for object recognition: Bayesian, neural networks, and expert systems
X. Yin, G. Dong, Ming Xie (2001)
Hand image segmentation using color and RCE neural network. Robotics Auton. Syst., 34
E. Krotkov (1987)
Focusing. International Journal of Computer Vision, 1
M. Seeburger (1967)
The New Home. The Palimpsest
(1999)
Where are you going, little robot? Prospects of human-robot interaction
Jie Yang, A. Waibel (1996)
A real-time face tracker. Proceedings Third IEEE Workshop on Applications of Computer Vision (WACV'96)
Human-robot interaction has recently attracted considerable attention in robotics. In this paper, we describe a multi-modal system for generating a map of the environment through interaction between a human and a home robot. The system enables people to teach a newcomer robot the attributes of objects and places in a room through speech commands and hand gestures. The robot learns the size, position, and topological relations of objects, and produces a map of the room based on the knowledge acquired through this communication. The system consists of several components: natural language processing, posture recognition, object localization, and map generation. It combines multiple sources of information with model matching to detect and track the user's hand, so that the user can point at an object of interest and guide the robot either to approach it or to record that object's position in the room. Object positions in the room are measured by monocular camera vision and the depth from focus method.
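The abstract mentions locating objects by monocular vision and the depth from focus method. As a rough illustration of that idea (a minimal sketch, not the authors' implementation; all names here are hypothetical), the snippet below scores each frame of a focal stack by the variance of its Laplacian response and reports the focus distance of the sharpest frame:

```python
import numpy as np

def focus_measure(img):
    # Sharpness score: variance of a 4-neighbour Laplacian response.
    # Defocus blur suppresses high spatial frequencies, lowering the score.
    lap = (-4.0 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float(lap.var())

def depth_from_focus(stack, focus_depths):
    # Given images of the same scene taken at known focus distances,
    # return the distance at which the image is sharpest.
    scores = [focus_measure(img) for img in stack]
    return focus_depths[int(np.argmax(scores))]
```

For a textured region, the frame focused at the region's true distance maximizes the sharpness score, so the returned focus distance serves as a depth estimate for that region.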
Autonomous Robots – Springer Journals
Published: Oct 10, 2004