Training deep neural networks on imbalanced data sets
Top 1% of 2016 papers by citation count
Abstract
Deep learning has become increasingly popular in both academia and industry in recent years. Domains including pattern recognition, computer vision, and natural language processing have witnessed the power of deep networks. However, current studies on deep learning mainly focus on data sets with balanced class labels, and its performance on imbalanced data is not well examined. Imbalanced data sets are widespread in the real world and pose great challenges for classification tasks. In this paper, we focus on classification with deep networks on imbalanced data sets. Specifically, we propose a novel loss function called mean false error (MFE), together with its improved version, mean squared false error (MSFE), for training deep networks on imbalanced data sets. The proposed method captures classification errors from the majority class and the minority class equally. Experiments and comparisons demonstrate the superiority of the proposed approach over conventional methods in classifying imbalanced data sets with deep neural networks.
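The core idea behind MFE/MSFE can be illustrated with a minimal NumPy sketch, under the assumption that the per-class error is the mean squared error computed separately over each class (the function names and the choice of squared error here are illustrative, not the paper's exact formulation):

```python
import numpy as np

def mfe_loss(y_true, y_pred):
    """Mean false error (illustrative sketch): average the error
    separately over each class, then sum the two class means, so the
    minority class contributes as much as the majority class.
    Assumes binary labels in {0, 1} and predicted probabilities."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    sq_err = 0.5 * (y_true - y_pred) ** 2
    fpe = sq_err[y_true == 0].mean()  # mean error on the negative class
    fne = sq_err[y_true == 1].mean()  # mean error on the positive class
    return fpe + fne

def msfe_loss(y_true, y_pred):
    """Mean squared false error (illustrative sketch): square each
    per-class mean error first, which penalizes the class with the
    larger mean error more strongly."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    sq_err = 0.5 * (y_true - y_pred) ** 2
    fpe = sq_err[y_true == 0].mean()
    fne = sq_err[y_true == 1].mean()
    return fpe ** 2 + fne ** 2
```

Contrast with a plain mean squared error over all samples: on a 3:1 imbalanced batch where only the single minority example is misclassified, the global mean dilutes that error by the batch size, whereas MFE keeps the minority class's mean error at full weight.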