
Decoding of Selective Attention to Speech From Ear-EEG Recordings

Authors:
Mike Thornton, Danilo Mandic, Tobias Reichenbach
Keywords:
Electrical Engineering and Systems Science; Audio and Speech Processing (eess.AS)
Journal:
--
Date:
2024-01-10
Abstract
Many people with hearing loss struggle to comprehend speech in crowded auditory scenes, even when they are using hearing aids. Future hearing technologies which can identify the focus of a listener's auditory attention, and selectively amplify that sound alone, could improve the experience that this patient group has with their hearing aids. In this work, we present the results of our experiments with an ultra-wearable in-ear electroencephalography (EEG) monitoring device. Participants listened to two competing speakers in an auditory attention experiment whilst their EEG was recorded. We show that typical neural responses to the speech envelope, as well as its onsets, can be recovered from such a device, and that the morphology of the recorded responses is indeed modulated by selective attention to speech. Features of the attended and ignored speech stream can also be reconstructed from the EEG recordings, with the reconstruction quality serving as a marker of selective auditory attention. Using the stimulus-reconstruction method, we show that with this device auditory attention can be decoded from short segments of EEG just a few seconds in duration. The results provide further evidence that ear-EEG systems offer good prospects for wearable auditory monitoring as well as future cognitively-steered hearing aids.
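The stimulus-reconstruction approach mentioned in the abstract can be illustrated with a minimal sketch. The Python example below shows the general backward-model idea: a regularised linear decoder reconstructs the speech envelope from time-lagged EEG, and each short segment is assigned to the speaker whose true envelope correlates best with the reconstruction. All specifics here (synthetic data, sampling rate, channel count, lag range, regularisation strength, training and testing on the same data) are assumptions made for illustration and are not taken from the paper.

```python
# Minimal, hypothetical sketch of stimulus reconstruction for attention decoding.
# Parameters and data are synthetic; they do not reproduce the paper's pipeline.
import numpy as np


def toy_envelope(n, rng, smooth=32):
    """Generate a smooth, non-negative toy speech envelope."""
    raw = np.abs(rng.standard_normal(n))
    kernel = np.ones(smooth) / smooth
    return np.convolve(raw, kernel, mode="same")


def lagged_design(eeg, max_lag):
    """Pair each stimulus sample t with EEG samples t .. t+max_lag,
    since the neural response follows the stimulus (backward model)."""
    n_samples, _ = eeg.shape
    cols = []
    for lag in range(max_lag + 1):
        shifted = np.zeros_like(eeg)
        shifted[: n_samples - lag] = eeg[lag:]
        cols.append(shifted)
    return np.hstack(cols)


def train_decoder(eeg, envelope, max_lag=16, ridge=1e2):
    """Fit a ridge-regularised linear decoder mapping lagged EEG to the attended envelope."""
    X = lagged_design(eeg, max_lag)
    return np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ envelope)


def decode_attention(eeg, env_a, env_b, weights, max_lag=16):
    """Reconstruct the envelope from EEG and label the segment with the
    speaker whose true envelope correlates best with the reconstruction."""
    recon = lagged_design(eeg, max_lag) @ weights
    r_a = np.corrcoef(recon, env_a)[0, 1]
    r_b = np.corrcoef(recon, env_b)[0, 1]
    return "A" if r_a > r_b else "B"


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fs, dur, n_ch = 64, 60, 4                  # hypothetical: 64 Hz, 60 s, 4 ear-EEG channels
    n = fs * dur
    env_att = toy_envelope(n, rng)             # "attended" speaker
    env_ign = toy_envelope(n, rng)             # "ignored" speaker
    # Toy EEG: a delayed mixture dominated by the attended envelope, plus noise.
    source = 0.8 * np.roll(env_att, 8) + 0.2 * np.roll(env_ign, 8)
    eeg = source[:, None] @ rng.standard_normal((1, n_ch)) + 0.3 * rng.standard_normal((n, n_ch))

    w = train_decoder(eeg, env_att)
    # Score short segments, mirroring decoding from a few seconds of EEG.
    # (For simplicity the decoder is trained and tested on the same toy data.)
    win = 5 * fs
    starts = range(0, n - win + 1, win)
    hits = sum(decode_attention(eeg[i:i + win], env_att[i:i + win], env_ign[i:i + win], w) == "A"
               for i in starts)
    print(f"Correctly decoded segments: {hits}/{len(starts)}")
```

In a real analysis, the EEG and audio would be filtered and aligned, the decoder would be evaluated with cross-validation rather than on its own training data, and decoding accuracy would be reported as a function of segment length; this sketch only conveys the structure of the reconstruct-then-correlate decision rule.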
PDF: Decoding of Selective Attention to Speech From Ear-EEG Recordings.pdf