
Multimodal Emotion Recognition using Transfer Learning from Speaker Recognition and BERT based models

Published

July 1, 2022
Author(s)

Sarala Padi, Omid Sadjadi, Dinesh Manocha, Ram Sriram

Abstract

Automatic emotion recognition plays a key role in human-computer interaction, as it has the potential to enrich next-generation artificial intelligence with emotional intelligence. It finds applications in customer and/or representative behavior analysis in call centers, gaming, personal assistants, and social robots, to name a few. There has therefore been an increasing demand for robust automatic methods to analyze and recognize emotions. In this paper, we propose a neural-network-based emotion recognition framework that uses a late fusion of transfer-learned and fine-tuned models from the speech and text modalities. More specifically, we i) adapt a residual network (ResNet) model trained on a large-scale speaker recognition task, using transfer learning along with a spectrogram augmentation approach, to recognize emotions from speech, and ii) use a fine-tuned bidirectional encoder representations from transformers (BERT) model to represent and recognize emotions from text. The proposed system then combines the ResNet- and BERT-based model scores using a late fusion strategy to further improve emotion recognition performance. This multimodal solution addresses the data scarcity limitation in emotion recognition through transfer learning, data augmentation, and fine-tuning, thereby improving the generalization performance of the emotion recognition models. We evaluate the effectiveness of the proposed multimodal approach on the interactive emotional dyadic motion capture (IEMOCAP) dataset. Experimental results indicate that both the audio- and text-based models improve emotion recognition performance, and that the proposed multimodal solution achieves state-of-the-art results on the IEMOCAP benchmark.
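
To make the late-fusion step concrete, the following is a minimal sketch in Python (using only NumPy). It assumes each modality's model has already produced per-utterance class scores; the names speech_logits and text_logits, the fusion weight alpha, and the four-class emotion label set commonly used for IEMOCAP are illustrative assumptions, not details taken from the paper.

    # Minimal late-fusion sketch: combine per-modality class scores at the
    # score (posterior) level. All names here are hypothetical placeholders.
    import numpy as np

    # Four-class setup commonly used in IEMOCAP evaluations (assumption).
    EMOTIONS = ["angry", "happy", "neutral", "sad"]

    def softmax(logits: np.ndarray) -> np.ndarray:
        """Convert raw logits to class probabilities (numerically stable)."""
        z = logits - logits.max(axis=-1, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=-1, keepdims=True)

    def late_fuse(speech_logits: np.ndarray,
                  text_logits: np.ndarray,
                  alpha: float = 0.5) -> np.ndarray:
        """Weighted average of per-modality posteriors; alpha weights speech."""
        p_speech = softmax(speech_logits)  # scores from the speech (ResNet) model
        p_text = softmax(text_logits)      # scores from the text (BERT) model
        return alpha * p_speech + (1.0 - alpha) * p_text

    # Usage: one utterance, with made-up scores from each model.
    speech_logits = np.array([[2.1, 0.3, -0.5, 0.9]])
    text_logits = np.array([[0.4, 1.8, 0.2, -0.1]])
    fused = late_fuse(speech_logits, text_logits, alpha=0.6)
    print(EMOTIONS[int(fused.argmax())])

In practice the fusion weight would be tuned on held-out data; equal weighting (alpha = 0.5) is the simplest starting point.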
Proceedings Title

Odyssey 2022: The Speaker and Language Recognition Workshop

Conference Dates

June 28-July 1, 2022

Conference Location

Beijing, CN

Keywords

multimodal emotion recognition, speech emotion recognition, transfer learning, BERT transformer, speaker recognition, IEMOCAP

Citation

Padi, S., Sadjadi, O., Manocha, D. and Sriram, R. (2022), Multimodal Emotion Recognition using Transfer Learning from Speaker Recognition and BERT based models, Odyssey 2022: The Speaker and Language Recognition Workshop, Beijing, CN, [online], https://tsapps.nist.gov/publication/get_pdf.cfm?pub_id=934159 (Accessed September 26, 2024)

Issues

If you have any questions about this publication or are having problems accessing it, please contact reflib@nist.gov.

Created July 1, 2022, Updated July 31, 2024