Multi-modal affect analysis (e.g., sentiment and emotion analysis) is an interdisciplinary study and an emerging, prominent field in Natural Language Processing and Computer Vision. The effective fusion of multiple modalities (e.g., text, acoustic, or visual frames) is a non-trivial task, as these modalities often carry distinct and diverse information and do not contribute equally. The issue escalates further when the data contain noise. In this article, we study the concept of multi-task learning for multi-modal affect analysis and explore a contextual inter-modal attention framework that leverages the association among neighboring utterances and their multi-modal information. In general, sentiments and emotions are inter-dependent (e.g., anger → negative or happy → positive). In our current work, we exploit the relatedness among the participating tasks in the multi-task framework. We define three different multi-task setups, each having two tasks: sentiment & emotion classification, sentiment classification & sentiment intensity prediction, and emotion classification & emotion intensity prediction. Our evaluation of the proposed system on the CMU Multi-modal Opinion Sentiment and Emotion Intensity (CMU-MOSEI) benchmark dataset suggests that, in comparison with the single-task learning framework, our multi-task framework yields better performance for the inter-related participating tasks. Further, comparative studies show that our proposed approach attains state-of-the-art performance in most cases.
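The core idea of inter-modal attention described above — letting the utterances of one modality attend over the utterances of another before fusion — can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation; the function names, feature dimensions, and the simple dot-product scoring are all assumptions made for the sketch.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def inter_modal_attention(X, Y):
    """Attend modality X over modality Y across the utterances of one video.

    X, Y: (num_utterances, dim) feature matrices for two modalities.
    Returns a (num_utterances, dim) matrix where each utterance of X
    is a weighted combination of the utterances of Y.
    """
    scores = X @ Y.T                          # (u, u) cross-modal affinity
    return softmax(scores, axis=-1) @ Y       # contextual attention over Y

rng = np.random.default_rng(0)
text = rng.standard_normal((5, 8))    # 5 utterances, 8-dim text features
audio = rng.standard_normal((5, 8))   # 5 utterances, 8-dim acoustic features

# Bidirectional text<->audio attention, concatenated as a fused representation;
# in the multi-task setup this shared representation would feed both task heads
# (e.g., a sentiment classifier and an emotion classifier).
fused = np.concatenate([inter_modal_attention(text, audio),
                        inter_modal_attention(audio, text)], axis=-1)
print(fused.shape)  # (5, 16)
```

In a full model the shared fused representation is what makes multi-task learning effective: both tasks backpropagate into the same attention and encoder parameters, so each task acts as a regularizer for the other.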
ACM Transactions on Knowledge Discovery from Data (TKDD) – Association for Computing Machinery
Published: May 8, 2020
Keywords: Multi-task learning