Measuring emotions during learning: lack of coherence between automated facial emotion recognition and emotional experience



Publisher
de Gruyter
Copyright
© 2019 Franziska Hirt et al., published by De Gruyter
eISSN
2299-1093
DOI
10.1515/comp-2019-0020

Abstract

Measuring emotions non-intrusively via affective computing provides a promising source of information for adaptive learning and intelligent tutoring systems. Using non-intrusive, simultaneous measures of emotions, such systems could continuously adapt to students' emotional states. One drawback, however, is the lack of evidence on how such modern measures of emotions relate to traditional self-reports. The aim of this study was to compare a prominent area of affective computing, facial emotion recognition, to students' self-reports of interest, boredom, and valence. We analyzed different types of aggregation of the simultaneous facial emotion recognition estimates and compared them to self-reports collected after reading a text. Analyses of 103 students revealed no relationship between the aggregated facial emotion recognition estimates of the software FaceReader and self-reports. Irrespective of the type of aggregation of the facial emotion recognition estimates, neither the epistemic emotions (i.e., boredom and interest) nor the estimates of valence predicted the respective self-report measure. We conclude that assumptions about the subjective experience of emotions cannot necessarily be transferred to other emotional components, such as those estimated by affective computing. We advise waiting for more comprehensive evidence on the predictive validity of facial emotion recognition for learning before relying on it in educational practice.
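
The abstract describes aggregating moment-by-moment facial emotion recognition estimates and relating the aggregates to post-reading self-reports. The sketch below illustrates that general idea only; the simulated data, column names, and aggregation choices (mean, peak, mean of the final third) are hypothetical assumptions for illustration and not the authors' actual FaceReader pipeline or analysis.

# Hypothetical sketch: aggregate per-frame emotion estimates and compare them
# to post-reading self-reports. All data here are simulated.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)

# Simulated per-frame "interest" intensities in [0, 1] for 103 students.
n_students, n_frames = 103, 500
frame_estimates = rng.beta(2, 5, size=(n_students, n_frames))

# Simulated self-reported interest after reading (e.g., a 1-7 Likert rating).
self_report = rng.integers(1, 8, size=n_students)

# Different types of aggregation of the simultaneous estimates.
aggregated = pd.DataFrame({
    "mean": frame_estimates.mean(axis=1),
    "peak": frame_estimates.max(axis=1),
    "final_third_mean": frame_estimates[:, -n_frames // 3:].mean(axis=1),
})

# Compare each aggregate to the self-report (here via Pearson correlation).
for name, values in aggregated.items():
    r, p = stats.pearsonr(values, self_report)
    print(f"{name:>18}: r = {r:+.2f}, p = {p:.3f}")

In the study itself, no such aggregate of the FaceReader estimates predicted the corresponding self-report measure.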

Journal

Open Computer Science, de Gruyter

Published: Jan 1, 2019
