
A Closer Look at Wav2Vec2 Embeddings for On-Device Single-Channel Speech Enhancement

Authors:
Ravi Shankar, Ke Tan, Buye Xu, Anurag Kumar
Keywords:
Electrical Engineering and Systems Science, Audio and Speech Processing (eess.AS), Artificial Intelligence (cs.AI), Machine Learning (cs.LG)
Journal:
--
Date:
2024-03-03
Abstract
Self-supervised learning (SSL) models have been found to be very effective for certain speech tasks such as automatic speech recognition, speaker identification, and keyword spotting. While the learned features are undeniably useful in speech recognition and associated tasks, their utility in speech enhancement systems is yet to be firmly established, and perhaps not properly understood. In this paper, we investigate the use of SSL representations for single-channel speech enhancement in challenging conditions and find that they add very little value to the enhancement task. Our constraints are designed around on-device, real-time speech enhancement: the model is causal and the compute footprint is small. Additionally, we focus on low-SNR conditions, where such models struggle to provide good enhancement. To systematically examine how SSL representations impact the performance of such enhancement models, we propose a variety of techniques to utilize these embeddings, including different forms of knowledge distillation and pre-training.
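To make the distillation idea concrete, the following is a minimal, hypothetical sketch of how frozen Wav2Vec2 embeddings could serve as a distillation target for a small causal enhancement encoder. It assumes the torchaudio WAV2VEC2_BASE bundle as the teacher; the student architecture (CausalStudentEncoder), the choice of teacher layer, the projection head, and the plain L2 loss are illustrative assumptions, not the paper's exact recipe.

# Hypothetical sketch: distilling frozen Wav2Vec2 embeddings into a small
# causal enhancement encoder. Architecture, layer choice, and loss weighting
# are illustrative assumptions, not the paper's exact method.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchaudio

bundle = torchaudio.pipelines.WAV2VEC2_BASE   # frozen SSL teacher (16 kHz input)
teacher = bundle.get_model().eval()
for p in teacher.parameters():
    p.requires_grad = False


class CausalStudentEncoder(nn.Module):
    """Tiny causal encoder standing in for the enhancement model's front end."""

    def __init__(self, hidden: int = 256, teacher_dim: int = 768):
        super().__init__()
        # Causal 1-D convolutions: pad only on the left so no future samples leak in.
        self.conv1 = nn.Conv1d(1, hidden, kernel_size=400, stride=320)  # ~20 ms hop at 16 kHz
        self.conv2 = nn.Conv1d(hidden, hidden, kernel_size=3)
        self.proj = nn.Linear(hidden, teacher_dim)  # map student features to teacher space

    def forward(self, wav: torch.Tensor) -> torch.Tensor:
        x = wav.unsqueeze(1)                          # (B, 1, T)
        x = F.relu(self.conv1(F.pad(x, (399, 0))))    # left-pad for causality
        x = F.relu(self.conv2(F.pad(x, (2, 0))))
        return self.proj(x.transpose(1, 2))           # (B, frames, teacher_dim)


def distillation_loss(noisy: torch.Tensor, clean: torch.Tensor,
                      student: CausalStudentEncoder) -> torch.Tensor:
    """L2 distance between student features computed on noisy speech and frozen
    Wav2Vec2 features of the clean reference, after aligning frame counts."""
    with torch.no_grad():
        feats, _ = teacher.extract_features(clean)    # list of per-layer features
        target = feats[-1]                            # (B, frames_t, 768); layer choice is an assumption
    pred = student(noisy)                             # (B, frames_s, 768)
    n = min(pred.shape[1], target.shape[1])           # crude alignment of frame counts
    return F.mse_loss(pred[:, :n], target[:, :n])


if __name__ == "__main__":
    student = CausalStudentEncoder()
    noisy = torch.randn(2, 16000)                     # 1 s of 16 kHz audio, batch of 2
    clean = torch.randn(2, 16000)
    loss = distillation_loss(noisy, clean, student)
    loss.backward()
    print(f"distillation loss: {loss.item():.4f}")

The same pattern could, under these assumptions, also serve as a pre-training objective for the encoder before an enhancement decoder is attached; only the training target changes.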
PDF: A Closer Look at Wav2Vec2 Embeddings for On-Device Single-Channel Speech Enhancement.pdf