
Attention-Driven Multichannel Speech Enhancement in Moving Sound Source Scenarios

Authors:
Yuzhu Wang, Archontis Politis, Tuomas Virtanen
Keywords:
Electrical Engineering and Systems Science, Audio and Speech Processing (eess.AS), Machine Learning (cs.LG), Signal Processing (eess.SP)
Journal:
--
Date:
2023-12-17
Abstract
Current multichannel speech enhancement algorithms typically assume a stationary sound source, a mismatch with reality that limits their performance in real-world scenarios. This paper focuses on attention-driven spatial filtering techniques designed for dynamic settings. Specifically, we study linear and nonlinear attention-based methods for estimating the time-varying spatial covariance matrices used to design the filters, and we also investigate direct estimation of the spatial filters by attention-based methods, without explicitly estimating spatial statistics. Clean speech clips from WSJ0 are used to simulate the speech signals of moving speakers in a reverberant environment, and the experimental dataset is built by mixing these simulated signals with multichannel real noise from CHiME-3. Evaluation results show that the attention-driven approaches are robust and consistently outperform conventional spatial filtering approaches in both static and dynamic sound environments.
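To make the idea concrete, below is a minimal NumPy sketch of attention-weighted spatial covariance estimation feeding a per-frame MVDR beamformer. This is an illustration of the general technique, not the paper's architecture: the attention scores here are random placeholders (in the paper they would come from a learned linear or nonlinear attention model), and the function names, the known steering vector d, and processing of a single frequency bin are all simplifying assumptions.

import numpy as np

def attention_scm(X, scores):
    # X:      (T, M) complex STFT frames of one frequency bin, M microphones.
    # scores: (T, T) real attention logits; row t says how much each frame
    #         contributes to the covariance estimate at time t.
    A = np.exp(scores - scores.max(axis=1, keepdims=True))
    A /= A.sum(axis=1, keepdims=True)                 # softmax over frames
    outer = np.einsum('tm,tn->tmn', X, X.conj())      # instantaneous x x^H
    return np.einsum('ts,smn->tmn', A, outer)         # (T, M, M) SCMs

def mvdr_weights(Phi_n, d):
    # Phi_n: (T, M, M) time-varying noise SCMs; d: (M,) steering vector,
    # assumed known here for simplicity.
    T = Phi_n.shape[0]
    rhs = np.tile(d[:, None], (T, 1, 1)).astype(complex)  # (T, M, 1)
    sol = np.linalg.solve(Phi_n, rhs)[..., 0]             # Phi_n^{-1} d
    denom = np.einsum('m,tm->t', d.conj(), sol)           # d^H Phi_n^{-1} d
    return sol / denom[:, None]                           # (T, M) weights

# Toy usage on random data; the scores would be learned in practice.
rng = np.random.default_rng(0)
T, M = 50, 4
X = rng.standard_normal((T, M)) + 1j * rng.standard_normal((T, M))
scores = rng.standard_normal((T, T))                  # placeholder attention
Phi_n = attention_scm(X, scores) + 1e-3 * np.eye(M)   # diagonal loading
w = mvdr_weights(Phi_n, d=np.ones(M, dtype=complex))
y = np.einsum('tm,tm->t', w.conj(), X)                # beamformed output

Note that with uniform attention weights this reduces to the conventional time-invariant covariance estimate, i.e. the static-source special case that the attention-driven approach generalizes.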