
Audio Sentiment Analysis by Heterogeneous Signal Features Learned from Utterance-Based Parallel Neural Network

EasyChair Preprint no. 668, version 2

18 pages
Date: December 13, 2018


Audio sentiment analysis is a popular research area that extends text-based sentiment analysis to rely on the effectiveness of acoustic features extracted from speech. However, current work on audio sentiment analysis either focuses on extracting homogeneous acoustic features or fails to fuse heterogeneous features effectively. In this paper, we propose an utterance-based deep neural network model, with a parallel combination of CNN- and LSTM-based networks, to obtain representative features, termed the Audio Sentiment Vector (ASV), that maximally reflect the sentiment information in an audio clip. Specifically, our model is trained with utterance-level labels, and the ASV is extracted and fused from the two branches. In the CNN branch, spectrum graphs produced by the signals are fed as inputs, while in the LSTM branch, the inputs include the spectral centroid, MFCCs, and other well-established traditional acoustic features extracted from the individual utterances of an audio clip. In addition, a BiLSTM with an attention mechanism is used for feature fusion. Extensive experiments show that our model recognizes audio sentiment precisely, and that the ASV outperforms traditional acoustic features as well as vectors extracted from other deep learning models. Furthermore, experimental results indicate that the proposed model outperforms the state-of-the-art approach by 9.33% on MOSI.
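The attention-based fusion step can be illustrated in miniature: each utterance yields a feature vector (standing in for the concatenated CNN- and LSTM-branch outputs), attention logits score the utterances, and a softmax-weighted sum pools them into one audio-level representation. This is a minimal stdlib sketch of the weighted-pooling idea only, not the paper's actual BiLSTM-attention model; the function names, vector dimensions, and scores below are illustrative assumptions.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of attention logits."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention_fuse(utterance_vectors, scores):
    """Weight per-utterance feature vectors by their attention
    scores and sum them into a single fused representation."""
    weights = softmax(scores)
    dim = len(utterance_vectors[0])
    fused = [0.0] * dim
    for w, vec in zip(weights, utterance_vectors):
        for i, v in enumerate(vec):
            fused[i] += w * v
    return fused

# Toy example: three utterances, each with a 4-dim feature vector.
vectors = [[1.0, 0.0, 0.0, 0.0],
           [0.0, 1.0, 0.0, 0.0],
           [0.0, 0.0, 1.0, 0.0]]
scores = [2.0, 0.0, 0.0]  # attention logits, e.g. from a BiLSTM scorer
fused = attention_fuse(vectors, scores)
```

In the full model, the attention logits would themselves be produced by the BiLSTM over the utterance sequence, so that more sentiment-bearing utterances dominate the fused ASV.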

Keyphrases: Audio Sentiment Analysis, feature fusion, signal processing

BibTeX entry
BibTeX does not have the right entry for preprints. This is a hack for producing the correct reference:
@booklet{EasyChair:668,
  author = {Ziqian Luo and Hua Xu and Feiyang Chen},
  title = {Audio Sentiment Analysis by Heterogeneous Signal Features Learned from Utterance-Based Parallel Neural Network},
  howpublished = {EasyChair Preprint no. 668},
  doi = {10.29007/7mhj},
  year = {EasyChair, 2018}}