ArxivPaperAI

An Explainable Proxy Model for Multilabel Audio Segmentation

Authors:
Théo Mariotte, Antonio Almudévar, Marie Tahon, Alfonso Ortega
Keywords:
Electrical Engineering and Systems Science, Audio and Speech Processing (eess.AS), Artificial Intelligence (cs.AI), Machine Learning (cs.LG), Sound (cs.SD), Signal Processing (eess.SP)
Journal:
AA001
Date:
2024-01-16
Abstract
Audio signal segmentation is a key task for automatic audio indexing. It consists of detecting the boundaries of class-homogeneous segments in the signal. In many applications, explainable AI is vital for transparent decision-making with machine learning. In this paper, we propose an explainable multilabel segmentation model that solves speech activity detection (SAD), music detection (MD), noise detection (ND), and overlapped speech detection (OSD) simultaneously. This proxy model uses non-negative matrix factorization (NMF) to map the embeddings used for segmentation to the frequency domain. Experiments conducted on two datasets show performance similar to that of the pre-trained black-box model while offering strong explainability: the frequency bins used for a decision can be easily identified both at the segment level (local explanations) and at the class level (class prototypes).
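The NMF-based mapping described in the abstract can be illustrated with a minimal sketch. Assuming a non-negative frame-level representation X, NMF factorizes it as X ≈ WH; the spectral dictionary W then lets per-frame activations, weighted by a hypothetical linear classification head, be projected back onto frequency bins, yielding segment-level relevance (local explanations) and averaged per-class relevance (class prototypes). The shapes, the `class_weights` head, and the four-class setup below are illustrative assumptions, not the authors' exact pipeline.

```python
# Minimal sketch: NMF maps non-negative activations back to frequency bins.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
X = np.abs(rng.standard_normal((257, 400)))   # stand-in magnitude spectrogram (freq bins x frames)

nmf = NMF(n_components=32, init="nndsvda", max_iter=400, random_state=0)
W = nmf.fit_transform(X)        # (257, 32) spectral dictionary: one frequency pattern per component
H = nmf.components_             # (32, 400) per-frame activations of each component

# Hypothetical linear head mapping the 32 activations to 4 classes (SAD, MD, ND, OSD).
class_weights = np.abs(rng.standard_normal((4, 32)))

# Local explanation: per-bin contribution to class c at frame t.
t, c = 100, 0
relevance_t = W @ (class_weights[c] * H[:, t])      # (257,)

# Global explanation ("class prototype"): average per-bin relevance over all frames.
prototype_c = (W @ (class_weights[c][:, None] * H)).mean(axis=1)   # (257,)
```

Because both W and H are non-negative, each entry of the relevance vectors can be read directly as how much a frequency bin contributes to the class score, which is the property that makes the proxy interpretable.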
PDF: An Explainable Proxy Model for Multilabel Audio Segmentation.pdf