Music emotion recognition method based on multi feature fusion


International Journal of Arts and Technology , Volume 14 (1): 14 – Jan 1, 2022

Publisher: Inderscience Publishers
Copyright: © Inderscience Enterprises Ltd
ISSN: 1754-8853
eISSN: 1754-8861
DOI: 10.1504/ijart.2022.122447

Abstract

Music emotion recognition suffers from problems such as a large root mean square error (RMSE) in recognition results and a low Pearson correlation coefficient. In the proposed method, the music signal is divided into frames by a window function and denoised by time-domain endpoint detection, thereby preprocessing the signal. Features characterising pitch change, fundamental-frequency rise and fall, tempo, and fundamental-frequency slope are extracted using Mel-frequency cepstral coefficients (MFCCs). From the extracted music emotion features, a multi-feature fusion kernel function is constructed. Based on the fusion results, a multi-level SVM emotion recognition model is built with support vector machines to realise music emotion recognition. Experimental results show that the RMSE of the proposed method always remains within 0.02, and the highest Pearson correlation coefficient is about 0.9.
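The preprocessing and fusion pipeline described in the abstract (framing with a window function, time-domain endpoint detection, then a fused kernel over several feature sets) can be sketched roughly as below. This is a minimal NumPy illustration, not the paper's implementation: the frame length, hop size, short-time-energy threshold, the choice of RBF and linear component kernels, and the fusion weight `alpha` are all assumptions made for the example.

```python
import numpy as np

def frame_signal(x, frame_len=512, hop=256):
    """Split a 1-D signal into overlapping Hamming-windowed frames."""
    n_frames = 1 + (len(x) - frame_len) // hop
    window = np.hamming(frame_len)
    frames = np.stack([x[i * hop : i * hop + frame_len] for i in range(n_frames)])
    return frames * window

def endpoint_mask(frames, energy_ratio=0.1):
    """Simple time-domain endpoint detection: keep frames whose
    short-time energy exceeds a fraction of the maximum frame energy."""
    energy = np.sum(frames ** 2, axis=1)
    return energy > energy_ratio * energy.max()

def fused_kernel(F1, F2, gamma=0.5, alpha=0.7):
    """Weighted fusion of two component kernels: an RBF kernel over
    feature matrix F1 and a linear kernel over F2 (rows = samples).
    alpha weights the RBF part; both choices are illustrative."""
    sq = np.sum(F1 ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * F1 @ F1.T
    K_rbf = np.exp(-gamma * d2)
    K_lin = F2 @ F2.T
    return alpha * K_rbf + (1 - alpha) * K_lin

# Example: preprocess a synthetic signal (silence + a 440 Hz tone + silence)
sig = np.concatenate([
    np.zeros(1024),                                    # leading silence
    np.sin(2 * np.pi * 440 * np.arange(4096) / 16000), # tone segment
    np.zeros(1024),                                    # trailing silence
])
frames = frame_signal(sig)
voiced = frames[endpoint_mask(frames)]  # silent frames are discarded
```

The fused Gram matrix returned by `fused_kernel` can then be handed to any SVM implementation that accepts precomputed kernels (for instance, scikit-learn's `SVC(kernel='precomputed')`) to build a kernel-fusion emotion classifier in the spirit of the multi-level SVM model described above.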

Journal

International Journal of Arts and Technology, Inderscience Publishers

Published: Jan 1, 2022
