Multimodal skin lesion classification using deep learning
Abstract
While convolutional neural networks (CNNs) have successfully been applied to skin lesion classification, previous studies have generally considered only a single clinical/macroscopic image and produced a binary decision. In this work, we present a method that combines multiple imaging modalities with patient metadata to improve the performance of automated skin lesion diagnosis. We evaluated our method on a binary classification task, for comparison with previous studies, as well as on a five-class classification task representative of a real-world clinical scenario. We show that our multimodal classifier outperforms a baseline classifier that uses only a single macroscopic image, both in binary melanoma detection (AUC 0.866 vs. 0.784) and in multiclass classification (mAP 0.729 vs. 0.598). In addition, we quantitatively show that automated diagnosis of skin lesions from dermatoscopic images achieves higher performance than diagnosis from macroscopic images. We performed our experiments on a new data set of 2917 cases, where each case contains a dermatoscopic image, a macroscopic image and patient metadata.
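The abstract describes combining multiple imaging modalities with patient metadata into a single classifier. As an illustration only, the sketch below shows one common way to do this, late fusion: each modality is passed through its own encoder, the resulting feature vectors are concatenated, and a shared head produces probabilities over the five diagnostic classes. The encoder and head here are random linear maps standing in for trained networks; all dimensions and names are assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature dimensions (illustration only; not from the paper).
D_IMG, D_DERM, D_MACRO, D_META_IN, D_META, N_CLASSES = 2048, 128, 128, 8, 16, 5

def encode(x, w):
    """Stand-in for a trained CNN/MLP encoder: one linear map plus ReLU."""
    return np.maximum(x @ w, 0.0)

# Randomly initialised weights for each modality-specific encoder
# and for the shared classification head.
w_derm = rng.standard_normal((D_IMG, D_DERM))
w_macro = rng.standard_normal((D_IMG, D_MACRO))
w_meta = rng.standard_normal((D_META_IN, D_META))
w_head = rng.standard_normal((D_DERM + D_MACRO + D_META, N_CLASSES))

def fuse_and_classify(derm, macro, meta):
    # Late fusion: encode each modality separately, concatenate the
    # feature vectors, then apply one shared classification head.
    feats = np.concatenate([
        encode(derm, w_derm),
        encode(macro, w_macro),
        encode(meta, w_meta),
    ])
    logits = feats @ w_head
    # Softmax over the five diagnostic classes.
    e = np.exp(logits - logits.max())
    return e / e.sum()

probs = fuse_and_classify(
    rng.standard_normal(D_IMG),    # dermatoscopic image features
    rng.standard_normal(D_IMG),    # macroscopic image features
    rng.standard_normal(D_META_IN) # encoded patient metadata
)
print(probs.shape)  # (5,) -- one probability per class
```

In a real system each `encode` would be a trained CNN (for the two image modalities) or a small MLP (for the metadata), but the fusion step itself is just this concatenation followed by a shared head.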