
Augmenting Deep Learning Models for Speech Emotion Recognition

Published

Author(s)

Ram Sriram, Dinesh Manocha, Sarala Padi

Abstract

We present a Multi-Window Data Augmentation (MWA-SER) approach for speech emotion recognition (SER). MWA-SER is a unimodal approach that focuses on two key concepts: designing the speech augmentation method and building the deep learning model to recognize the underlying emotion of an audio signal. Our proposed multi-window augmentation approach generates additional data samples from the speech signal by employing multiple window sizes in the audio feature extraction process. We show that our augmentation method, combined with a deep learning model, improves speech emotion recognition performance. We evaluate the performance of our approach on three benchmark datasets: IEMOCAP, SAVEE, and RAVDESS. We show that the multi-window model improves SER performance and outperforms a single-window model. Finding the best window size is an essential step in audio feature extraction. We perform extensive experimental evaluations to find the best window choice and to explore the effect of windowing on SER analysis.
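As a rough illustration of the multi-window idea described in the abstract, the sketch below extracts one spectral feature map per window size from a single utterance, so each window size contributes an additional training sample. It is a minimal sketch, not the authors' implementation: the use of librosa, mel-spectrogram features, the specific window sizes, the 50% overlap, and the mel settings are all assumptions made here for illustration.

```python
# Minimal sketch of multi-window feature extraction (illustrative only;
# not the MWA-SER authors' code or configuration).
import numpy as np
import librosa


def multi_window_features(signal, sr, window_sizes_ms=(25, 50, 100)):
    """Return one mel-spectrogram per window size, yielding several
    feature samples from a single speech signal."""
    features = []
    for win_ms in window_sizes_ms:                      # assumed window sizes
        win_length = int(sr * win_ms / 1000)            # window size in samples
        hop_length = max(1, win_length // 2)            # assumed 50% overlap
        n_fft = 2 ** int(np.ceil(np.log2(win_length)))  # next power of two
        mel = librosa.feature.melspectrogram(
            y=signal, sr=sr, n_fft=n_fft,
            win_length=win_length, hop_length=hop_length, n_mels=64)
        features.append(librosa.power_to_db(mel, ref=np.max))
    return features


# Example with a synthetic tone standing in for a real utterance.
sr = 16000
t = np.linspace(0, 2.0, 2 * sr, endpoint=False)
signal = 0.5 * np.sin(2 * np.pi * 220 * t).astype(np.float32)
for mel in multi_window_features(signal, sr):
    print(mel.shape)  # each window size produces a differently shaped feature map
```

In a setup like this, the feature maps from all window sizes would be fed to the deep learning model as additional training samples, which is the augmentation effect the abstract describes.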

Keywords

Artificial Intelligence, Speech Recognition, Emotion, Testing, Evaluation

Citation

Sriram, R., Manocha, D. and Padi, S. (2020), Augmenting Deep Learning Models for Speech Emotion Recognition, arXiv, [online], https://arxiv.org/ (Accessed October 31, 2024)

Issues

If you have any questions about this publication or are having problems accessing it, please contact reflib@nist.gov.

Created October 19, 2020, Updated July 31, 2024