Ensemble majority voting classifier for speech emotion recognition and prediction


References (33)

Publisher
Emerald Publishing
Copyright
Copyright © 2014 Emerald Group Publishing Limited. All rights reserved.
ISSN
1328-7265
DOI
10.1108/JSIT-01-2014-0009

Abstract

Purpose – The purpose of this paper is to understand the emotional state of a human being by capturing the speech utterances used in everyday conversation. Human beings, besides being thinking creatures, are also sentimental and emotional organisms. Six universal basic emotions plus a neutral state are considered: happiness, surprise, fear, sadness, anger, disgust and neutral.

Design/methodology/approach – It is shown that, given enough acoustic evidence, the emotional state of a person can be classified by an ensemble majority voting classifier. The proposed ensemble classifier is constructed over three base classifiers: k-nearest neighbours (k-NN), C4.5 and a support vector machine (SVM) with a polynomial kernel.

Findings – The proposed ensemble classifier achieves better performance than each of its base classifiers. It is also compared with two other ensemble classifiers: a one-against-all (OAA) multiclass SVM with radial basis function kernels and an OAA multiclass SVM with hybrid kernels. The proposed ensemble classifier outperforms both.

Originality/value – The paper performs emotion classification with an ensemble majority voting classifier that combines three particular types of base classifiers, each of low computational complexity. The base classifiers stem from different theoretical backgrounds to avoid bias and redundancy, which gives the proposed ensemble classifier the ability to generalize over the emotion domain space.
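The design described in the abstract, three heterogeneous base classifiers combined by hard majority voting, can be sketched with off-the-shelf tools. The snippet below is an illustrative sketch and not the authors' implementation: scikit-learn's DecisionTreeClassifier stands in for C4.5, and the feature matrix is random placeholder data in place of real acoustic features (e.g. MFCC statistics) extracted from speech utterances.

```python
# Minimal sketch of a majority-voting ensemble in the spirit of the paper.
# Assumptions (not from the paper): DecisionTreeClassifier approximates C4.5,
# and X holds pre-extracted acoustic features rather than raw audio.
import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Placeholder data: 200 utterances, 40 acoustic features, 7 emotion labels
# (happiness, surprise, fear, sadness, anger, disgust, neutral).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 40))
y = rng.integers(0, 7, size=200)

# Three base classifiers from different theoretical families.
knn = KNeighborsClassifier(n_neighbors=5)           # instance-based learner
tree = DecisionTreeClassifier(criterion="entropy")  # decision tree (C4.5-like)
svm = SVC(kernel="poly", degree=3, C=1.0)           # SVM with polynomial kernel

# Hard (majority) voting: each base classifier casts one vote per utterance
# and the most frequent predicted emotion wins.
ensemble = VotingClassifier(
    estimators=[("knn", knn), ("c45", tree), ("svm_poly", svm)],
    voting="hard",
)

scores = cross_val_score(ensemble, X, y, cv=5)
print(f"Mean CV accuracy: {scores.mean():.3f}")
```

With hard voting, the ensemble's prediction is simply the most frequent label among the three base predictions, so the combination step adds negligible cost on top of the base classifiers themselves.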

Journal

Journal of Systems and Information Technology, Emerald Publishing

Published: Aug 5, 2014

Keywords: Speech emotion recognition; Affective computing; Machine learning
