
Semi-Supervised Multimodal Multi-Instance Learning for Aortic Stenosis Diagnosis

Author:
Zhe Huang, Xiaowei Yu, Benjamin S. Wessler, Michael C. Hughes
Keyword:
Computer Science, Computer Vision and Pattern Recognition (cs.CV), Emerging Technologies (cs.ET), Machine Learning (cs.LG)
Journal:
--
Date:
2024-03-09
Abstract
Automated interpretation of ultrasound imaging of the heart (echocardiograms) could improve the detection and treatment of aortic stenosis (AS), a deadly heart disease. However, existing deep learning pipelines for assessing AS from echocardiograms have two key limitations. First, most methods rely on limited 2D cineloops, thereby ignoring widely available Doppler imaging that contains important complementary information about pressure gradients and blood flow abnormalities associated with AS. Second, obtaining labeled data is difficult. There are often far more unlabeled echocardiogram recordings available, but these remain underutilized by existing methods. To overcome these limitations, we introduce Semi-supervised Multimodal Multiple-Instance Learning (SMMIL), a new deep learning framework for automatic interpretation of structural heart diseases like AS. When deployed, SMMIL combines information from two input modalities, spectral Dopplers and 2D cineloops, to produce a study-level AS diagnosis. During training, SMMIL can combine a smaller labeled set and an abundant unlabeled set of both modalities to improve its classifier. Experiments demonstrate that SMMIL outperforms recent alternatives at 3-level AS severity classification as well as several clinically relevant AS detection tasks.
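To illustrate the multimodal multi-instance setup described in the abstract, the sketch below shows one common way to aggregate a variable number of per-image features from two modalities into a single study-level prediction: attention pooling over each modality's bag of instances, followed by fusion and a 3-class severity head. This is a minimal, hypothetical example; the class names, feature dimensions, and pooling choice are assumptions for illustration, not the authors' SMMIL implementation, and the semi-supervised training on unlabeled studies is omitted.

```python
import torch
import torch.nn as nn

class AttentionPool(nn.Module):
    """Attention pooling over a bag of instance embeddings (illustrative, not SMMIL's exact module)."""
    def __init__(self, dim, hidden=128):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, 1))

    def forward(self, x):                            # x: (num_instances, dim)
        w = torch.softmax(self.score(x), dim=0)      # one attention weight per instance
        return (w * x).sum(dim=0)                    # (dim,) bag-level embedding

class MultimodalMILClassifier(nn.Module):
    """Toy study-level classifier: pool cineloop and Doppler instance features, fuse, classify."""
    def __init__(self, cine_dim=512, doppler_dim=256, num_classes=3):
        super().__init__()
        self.cine_pool = AttentionPool(cine_dim)
        self.doppler_pool = AttentionPool(doppler_dim)
        self.head = nn.Linear(cine_dim + doppler_dim, num_classes)

    def forward(self, cine_feats, doppler_feats):
        # cine_feats: (num_cineloops, cine_dim); doppler_feats: (num_dopplers, doppler_dim)
        z = torch.cat([self.cine_pool(cine_feats), self.doppler_pool(doppler_feats)])
        return self.head(z)                          # logits for 3-level AS severity

# Example: one echo study with 20 cineloop clips and 5 Doppler images (pre-extracted features)
model = MultimodalMILClassifier()
logits = model(torch.randn(20, 512), torch.randn(5, 256))
print(logits.shape)                                  # torch.Size([3])
```

In a semi-supervised setting such as the one described above, a model like this would additionally be trained on unlabeled studies, e.g. via a consistency or pseudo-labeling objective, alongside the supervised loss on the smaller labeled set.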