A Combined CNN Architecture for Speech Emotion Recognition

Journal
Sensors
ISSN
1424-8220
Date Issued
2024-09-06
DOI
10.3390/s24175797
Abstract
Emotion recognition through speech is a technique employed in various scenarios of Human–Computer Interaction (HCI). Existing approaches have achieved significant results; however, limitations persist, most notably the quantity and diversity of data required when deep learning techniques are used. The lack of a standard for feature selection leads to continuous development and experimentation, and choosing and designing an appropriate network architecture constitutes a further challenge. This study addresses the challenge of recognizing emotions in the human voice using deep learning techniques, proposing a comprehensive approach: preprocessing and feature selection stages are developed, and a dataset called EmoDSc is constructed by combining several available databases. The synergy between spectral features and spectrogram images is investigated. Independently, the weighted accuracy obtained using only spectral features was 89%, while using only spectrogram images it reached 90%. These results, although surpassing previous research, highlight the strengths and limitations of each representation when operating in isolation. Based on this exploration, a neural network architecture composed of a CNN1D, a CNN2D, and an MLP that fuses spectral features and spectrogram images is proposed. The model, supported by the unified EmoDSc dataset, achieves a remarkable accuracy of 96%.
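The fusion architecture the abstract describes (a 1D CNN branch over spectral features, a 2D CNN branch over spectrogram images, and an MLP head over the concatenated embeddings) can be sketched as below. This is a minimal PyTorch illustration, not the authors' implementation: the layer counts, kernel sizes, input dimensions (`n_spectral`, the spectrogram size), and the number of emotion classes are assumptions made for the example.

```python
# Hypothetical sketch of a CNN1D + CNN2D + MLP fusion model for speech
# emotion recognition. All layer sizes are illustrative assumptions,
# not the configuration reported in the paper.
import torch
import torch.nn as nn

class FusionSER(nn.Module):
    def __init__(self, n_spectral=180, n_emotions=7):  # both values assumed
        super().__init__()
        # CNN1D branch: consumes a vector of spectral features
        self.cnn1d = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),  # -> (batch, 64)
        )
        # CNN2D branch: consumes a single-channel spectrogram image
        self.cnn2d = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),  # -> (batch, 64)
        )
        # MLP head: fuses the concatenated branch embeddings
        self.mlp = nn.Sequential(
            nn.Linear(64 + 64, 128), nn.ReLU(), nn.Dropout(0.3),
            nn.Linear(128, n_emotions),
        )

    def forward(self, spectral, spectrogram):
        # spectral: (batch, 1, n_spectral); spectrogram: (batch, 1, H, W)
        fused = torch.cat([self.cnn1d(spectral), self.cnn2d(spectrogram)], dim=1)
        return self.mlp(fused)

# Example forward pass with dummy inputs
model = FusionSER()
logits = model(torch.randn(4, 1, 180), torch.randn(4, 1, 128, 128))
print(logits.shape)  # torch.Size([4, 7])
```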
Author(s)
Rolinson Begazo
Ana Aguilera
Irvin Dongo
Yudith Cardinale
