Human Emotion Recognition Based on Voice Analysis

  • Alexandra Voinea "Dunarea de Jos" University of Galati
  • Violeta-Cornelia Vasilenco "Dunarea de Jos" University of Galati
  • Razvan-Adrian Tudoran "Dunarea de Jos" University of Galati
  • Andrei Tănase "Dunarea de Jos" University of Galati
Keywords: emotion recognition, voice processing, convolutional neural networks, machine learning, TensorFlow library

Abstract

This paper presents an experiment in human emotion recognition based on voice analysis. The audio signals generated by capturing the human voice with a microphone are input into an audio processing system that creates image files containing the spectrograms of the corresponding signals. Assuming that certain emotions produce recognizable alterations in the spectral composition of the voice signal, the spectrogram images were classified using a convolutional neural network. The training data set consisted of 2407 recordings created by 24 actors displaying a palette of emotional states: neutral, calm, happy, angry, scared, disgusted, and surprised. The overall accuracy of the recognition system was quite modest (around 20%), but the implementation based on the open-source TensorFlow library for machine learning merits attention.
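The abstract does not give implementation details, so the sketch below only illustrates the general pipeline it describes: converting a voice recording into a spectrogram image and classifying it with a small convolutional network built with TensorFlow. The file path, spectrogram parameters, image size, and network architecture are assumptions for illustration, not the authors' configuration.

```python
# Minimal sketch of the spectrogram -> CNN pipeline described in the abstract.
# Assumes librosa for audio loading/STFT; paths, shapes and hyperparameters are illustrative.
import numpy as np
import librosa
import tensorflow as tf

NUM_CLASSES = 7          # neutral, calm, happy, angry, scared, disgusted, surprised
IMG_SHAPE = (128, 128)   # assumed spectrogram image size

def audio_to_spectrogram(path, n_fft=1024, hop_length=256):
    """Load a recording and return a dB-scaled magnitude spectrogram."""
    y, sr = librosa.load(path, sr=None)              # keep original sample rate
    stft = librosa.stft(y, n_fft=n_fft, hop_length=hop_length)
    spec_db = librosa.amplitude_to_db(np.abs(stft), ref=np.max)
    # Crop/pad and normalize to a fixed-size single-channel "image"
    spec_db = spec_db[:IMG_SHAPE[0], :IMG_SHAPE[1]]
    spec_db = np.pad(spec_db,
                     ((0, IMG_SHAPE[0] - spec_db.shape[0]),
                      (0, IMG_SHAPE[1] - spec_db.shape[1])),
                     mode="constant")
    spec_db = (spec_db - spec_db.min()) / (spec_db.max() - spec_db.min() + 1e-8)
    return spec_db[..., np.newaxis]                  # shape (128, 128, 1)

def build_cnn():
    """A small CNN classifier over spectrogram images (illustrative architecture)."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(*IMG_SHAPE, 1)),
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Example usage (hypothetical file path and label):
# x = audio_to_spectrogram("recordings/actor01_happy.wav")
# model = build_cnn()
# model.fit(np.stack([x]), np.array([2]), epochs=1)
```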

Published
2019-02-27
How to Cite
Voinea A, Vasilenco V-C, Tudoran R-A, Tănase A. Human Emotion Recognition Based on Voice Analysis. The Annals of “Dunarea de Jos” University of Galati. Fascicle III, Electrotechnics, Electronics, Automatic Control, Informatics [Internet]. 27 Feb. 2019 [cited 5 May 2024];41(2):13-9. Available from: https://www.gup.ugal.ro/ugaljournals/index.php/eeaci/article/view/222
Section
Articles
