In the Mood for Vlog: Multimodal Inference in Conversational Social Video

DAIRAZALIA SANCHEZ-CORTES, Idiap Research Institute
SHIRO KUMANO and KAZUHIRO OTSUKA, NTT Communication Science Laboratories
DANIEL GATICA-PEREZ, Idiap Research Institute and École Polytechnique Fédérale de Lausanne (EPFL)

Publisher: Association for Computing Machinery
Copyright: © 2015 ACM, Inc.
ISSN: 2160-6455
DOI: 10.1145/2641577

Abstract

The prevalent "share what's on your mind" paradigm of social media can be examined from the perspective of mood: short-term affective states revealed by the shared data. This view takes on new relevance given the emergence of conversational social video as a popular genre among viewers looking for entertainment and among video contributors as a channel for debate, expertise sharing, and artistic expression. From the perspective of human behavior understanding, in conversational social video both verbal and nonverbal information is conveyed by speakers and decoded by viewers. We present a systematic study of classification and ranking of mood impressions in social video, using vlogs from YouTube. Our approach considers eleven natural mood categories labeled through crowdsourcing by external observers on a diverse set of conversational vlogs. We extract a comprehensive set of nonverbal and verbal behavioral cues from the audio and video channels to characterize the mood of vloggers. Then we implement and validate vlog classification and …
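As an illustration of the kind of multimodal pipeline the abstract describes, the sketch below fuses per-vlog audio and visual cue vectors and trains a classifier against crowdsourced mood labels. It is a minimal sketch only: the cue dimensions, the synthetic data, and the use of NumPy and scikit-learn are assumptions for illustration, not the feature extractors or models reported in the paper.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Stand-in data: one row per vlog. In practice these vectors would come from
# audio analysis (e.g., pitch, energy, speaking-time statistics) and video
# analysis (e.g., motion and facial-expression statistics).
n_vlogs = 200
audio_cues = rng.normal(size=(n_vlogs, 25))
visual_cues = rng.normal(size=(n_vlogs, 15))
features = np.hstack([audio_cues, visual_cues])  # feature-level (early) fusion

# One binary label per mood category (e.g., judged "happy" or not by observers).
labels = rng.integers(0, 2, size=n_vlogs)

# Classify and report cross-validated accuracy for this mood category.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, features, labels, cv=5)
print("5-fold accuracy: %.2f" % scores.mean())

Feature-level fusion as shown here is one common way to combine audio and video cues; the paper's actual fusion, classification, and ranking setup may differ.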

Journal

ACM Transactions on Interactive Intelligent Systems (TiiS), Association for Computing Machinery

Published: Jun 30, 2015
