
Design, construction and evaluation of emotional multimodal pathological speech database

Authors:
Ting Zhu, Shufei Duan, Huizhi Liang, Wei Zhang
Keywords:
Electrical Engineering and Systems Science, Audio and Speech Processing (eess.AS), Artificial Intelligence (cs.AI), Sound (cs.SD), Signal Processing (eess.SP)
Journal:
--
Date:
2023-12-14
Abstract
The lack of an available emotional pathological speech database is one of the key obstacles to studying how patients with dysarthria express emotion. This paper constructs the first Chinese multimodal emotional pathological speech database containing multi-perspective information. It includes 29 controls and 39 patients with different degrees of motor dysarthria, each expressing happy, sad, angry, and neutral emotions. All emotional speech was labeled for intelligibility, emotion type, and discrete and dimensional emotion using a purpose-built WeChat mini-program. The subjective analysis validates the database in terms of emotion discrimination accuracy, speech intelligibility, valence-arousal spatial distribution, and the correlation between SCL-90 scores and disease severity. Automatic emotion recognition was tested on speech and glottal data, with average accuracies of 78% for controls and 60% for patients on audio, and 51% for controls and 38% for patients on glottal data, indicating an influence of the disease on emotional expression.
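The abstract does not specify the features or classifier used for the automatic recognition experiments. As a rough, hedged illustration of the kind of four-class baseline such an evaluation might use, the sketch below extracts utterance-level MFCC statistics and cross-validates an SVM; the directory layout, feature choice, and model are assumptions for demonstration, not the authors' pipeline.

```python
# Illustrative sketch only: feature choice (MFCC statistics) and classifier (SVM)
# are assumptions, not the method reported in the paper.
import glob
import numpy as np
import librosa
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

EMOTIONS = ["happy", "sad", "angry", "neutral"]  # the four emotions in the database


def utterance_features(path, sr=16000, n_mfcc=13):
    """Mean and std of MFCCs over the utterance -> one fixed-length vector."""
    y, _ = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])


# Hypothetical layout: wav files grouped in per-emotion folders, e.g. data/happy/*.wav
X, y = [], []
for label, emotion in enumerate(EMOTIONS):
    for path in glob.glob(f"data/{emotion}/*.wav"):
        X.append(utterance_features(path))
        y.append(label)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, np.array(X), np.array(y), cv=5)
print(f"4-class emotion accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```

Running the same sketch separately on control and patient recordings (or on glottal-signal features in place of MFCCs) would mirror the kind of group-wise comparison the abstract reports.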
PDF: Design, construction and evaluation of emotional multimodal pathological speech database.pdf