
Added Value of Gaze-Exploiting Semantic Representation to Allow Robots Inferring Human Behaviors




Publisher
Association for Computing Machinery
Copyright
Copyright © 2017 ACM
ISSN
2160-6455
eISSN
2160-6463
DOI
10.1145/2939381

Abstract

Neuroscience studies have shown that combining the gaze (first-person) view with a third-person perspective strongly influences how correctly human behaviors are inferred. Given the importance of both first- and third-person observations for recognizing human behaviors, we propose a method that incorporates both in a technical system, improving beyond third-person observations alone toward a more robust human activity recognition system. First, we extend our semantic reasoning method to take gaze data and external observations as inputs for segmenting and inferring human behaviors in complex real-world scenarios. Then, from the obtained results, we demonstrate that combining gaze and external input sources substantially enhances the recognition of human behaviors. We have applied our findings to a humanoid robot, which segments and recognizes observed human activities online with better accuracy when using both input sources; for example, activity recognition increases from 77% to 82% on our proposed pancake-making dataset. For completeness, we have also evaluated our approach on another dataset with a setup similar to the one proposed in this work, the CMU-MMAC dataset. In this case, combining the external views with gaze information improved activity recognition in the egg-scrambling scenario from 54% to 86%, showing the benefit of incorporating gaze information to infer human behaviors across different datasets.
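To make the idea of fusing the two input sources concrete, below is a minimal sketch of how gaze (first-person) cues and external (third-person) cues might be combined to label an activity segment. This is not the authors' implementation: the class names, rule table, labels (e.g., "flip", "pour"), and majority-vote segmentation are hypothetical illustrations under the assumption that each frame carries a gaze-fixated object, a coarse hand-motion label, and an object-in-hand detection.

```python
# Hypothetical sketch: fuse first-person (gaze) and third-person (external) cues
# to label an activity segment. Names, rules, and labels are illustrative only.

from dataclasses import dataclass
from collections import Counter

@dataclass
class Frame:
    gaze_object: str      # object fixated in the egocentric (gaze) view
    hand_motion: str      # coarse motion label from the external view, e.g. "move" / "not_move"
    object_in_hand: str   # object detected in the hand from the external view, or "none"

# Toy semantic rules mapping (hand_motion, object_in_hand, gaze_object) -> activity.
# The paper derives such decisions from semantic reasoning; here they are hard-coded.
RULES = {
    ("move", "spatula", "pancake"): "flip",
    ("move", "bottle", "pan"):      "pour",
    ("move", "none", "bottle"):     "reach",
    ("not_move", "none", "pan"):    "idle",
}

def label_frame(frame: Frame) -> str:
    """Combine gaze and external observations for a single frame."""
    key = (frame.hand_motion, frame.object_in_hand, frame.gaze_object)
    return RULES.get(key, "unknown")  # unknown combinations get no activity label

def label_segment(frames: list[Frame]) -> str:
    """Majority vote over a segment of frames; a simple stand-in for online segmentation."""
    votes = Counter(label_frame(f) for f in frames)
    return votes.most_common(1)[0][0]

if __name__ == "__main__":
    segment = [
        Frame("pan", "move", "bottle"),
        Frame("pan", "move", "bottle"),
        Frame("pancake", "move", "spatula"),
    ]
    print(label_segment(segment))  # -> "pour" (two of three frames match the pour rule)
```

The point of the sketch is the structure of the fusion, not the rules themselves: the third-person view alone (hand motion plus object in hand) is often ambiguous, and the gaze-fixated object supplies the extra evidence that disambiguates the activity, which is the effect the reported accuracy gains attribute to adding gaze.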

Journal

ACM Transactions on Interactive Intelligent Systems (TiiS), Association for Computing Machinery

Published: Mar 23, 2017

Keywords: Robot learning by observation
