Adaptive Body Gesture Representation for Automatic Emotion Recognition

Publisher
Association for Computing Machinery
Copyright
Copyright © 2016 ACM
ISSN
2160-6455
eISSN
2160-6463
DOI
10.1145/2818740

Abstract

We present a computational model and a system for the automated recognition of emotions from full-body movement. Three-dimensional motion data of full-body movements are obtained either from professional optical motion-capture systems (Qualisys) or from low-cost RGB-D sensors (Kinect and Kinect2). Features are then automatically extracted at several levels, from the kinematics of a single joint to more global expressive features inspired by psychology and humanistic theories (e.g., contraction index, fluidity, and impulsiveness). An abstraction layer based on dictionary learning further processes these movement features to increase the model's generality and to cope with the intraclass variability, noise, and incomplete information that characterize emotion expression in human movement. The resulting feature vector is the input to a classifier that performs real-time automatic emotion recognition using linear support vector machines. We present and discuss the recognition performance of the proposed model, including the tradeoff between the precision of the tracking measures (comparing the Kinect RGB-D sensor with the Qualisys motion-capture system) and the size of the training dataset. The resulting model and system have been successfully applied in developing serious games that help autistic children learn to recognize and express emotions through their full-body movement.

Journal

ACM Transactions on Interactive Intelligent Systems (TiiS), Association for Computing Machinery

Published: Mar 9, 2016

Keywords: Body
