A Refining Underlying Information Framework for Speech Enhancement

Rui Cao, Tianrui Wang, Meng Ge, Longbiao Wang, Jianwu Dang
Electrical Engineering and Systems Science — Audio and Speech Processing (eess.AS), Sound (cs.SD), Signal Processing (eess.SP)
2023-12-18
The advent of deep neural networks (DNNs) has significantly improved the performance of monaural speech enhancement (SE). Most such methods attempt to implicitly capture the structural features of speech through distribution approximation. However, existing methods remain susceptible to degraded speech and residual noise. This letter takes the Information Bottleneck as an anchor for rethinking SE. By defining the incremental convergence of mutual information between speech characteristics, we show that the acoustic characteristics of speech are crucial to alleviating the above issues: their explicit introduction further approximates the optimal information-theoretic upper bound of the optimization. Drawing on the chain rule of entropy, we also propose a framework that reconstructs the information composition of the optimization objective, integrating and refining this underlying characteristic without loss of generality. Visualizations are consistent with the information-theoretic analysis. Experimental results show that with only 1.18 M additional parameters, the refined CRN yields substantial improvements over a number of advanced methods. The source code is available at
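For context, the two standard information-theoretic tools the abstract invokes can be sketched as follows (generic notation, not necessarily the paper's own symbols): the chain rule of entropy, and the Information Bottleneck objective, where $X$ is the noisy input, $Y$ the clean target, $Z$ a learned representation, and $\beta$ a trade-off weight.

```latex
% Chain rule of entropy: joint uncertainty decomposes sequentially.
H(X, Y) = H(X) + H(Y \mid X)

% Information Bottleneck: compress X into Z while preserving
% information about the target Y (beta trades compression vs. relevance).
\min_{p(z \mid x)} \; I(X; Z) - \beta \, I(Z; Y)
```

The abstract's claim is that explicitly injecting acoustic characteristics of speech into the objective tightens how closely the learned representation approaches the relevant-information term $I(Z;Y)$.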