Fast and accurate view classification of echocardiograms using deep learning
Top 1% of 2018 papers by citations
Abstract
Echocardiography is essential to cardiology. However, the need for human interpretation has limited echocardiography's full potential for precision medicine. Deep learning is an emerging tool for analyzing images but has not yet been widely applied to echocardiograms, partly due to their complex multi-view format. The essential first step toward comprehensive computer-assisted echocardiographic interpretation is determining whether computers can learn to recognize these views. We trained a convolutional neural network to simultaneously classify 15 standard views (12 video, 3 still), based on labeled still images and videos from 267 transthoracic echocardiograms that captured a range of real-world clinical variation. Our model classified among 12 video views with 97.8% overall test accuracy without overfitting. Even on single low-resolution images, accuracy among 15 views was 91.7% vs. 70.2-84.0% for board-certified echocardiographers. Data visualization experiments showed that the model recognizes similarities among related views and classifies using clinically relevant image features. Our results provide a foundation for artificial intelligence-assisted echocardiographic interpretation.