Deep learning regularization techniques to genomics data
Top 12% of 2021 papers
Abstract
Deep Learning algorithms have achieved great success in many domains where large-scale datasets are available. However, training these algorithms on high-dimensional data requires adjusting many parameters, and avoiding overfitting is difficult. Regularization techniques such as L1 and L2 are used to prevent the parameters of the training model from growing large. Another commonly used regularization method, Dropout, randomly removes some hidden units during the training phase. In this work, we describe several Deep Learning architectures, explain the optimization process used to train them, and attempt to establish a theoretical relationship between L2 regularization and Dropout. We experimentally compare the effect of these techniques on the learning model using genomics datasets.
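As a minimal sketch of the two regularizers the abstract compares (not the paper's own code), the snippet below shows an L2 penalty on a weight matrix and an inverted-dropout mask on hidden activations; the toy weights, rate `p=0.5`, and coefficient `lam` are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy weights for a single linear layer.
W = rng.normal(size=(4, 3))

# L2 regularization adds lam * ||W||^2 to the loss; its gradient
# contribution 2 * lam * W shrinks weights toward zero each step.
lam = 0.01
l2_penalty = lam * np.sum(W ** 2)
l2_grad = 2 * lam * W

def dropout(h, p, rng):
    """Inverted dropout: zero each unit with probability p and scale
    survivors by 1/(1-p), so the expected activation is unchanged and
    no rescaling is needed at test time."""
    mask = (rng.random(h.shape) >= p) / (1.0 - p)
    return h * mask

# Apply dropout to a batch of hidden activations during training.
h = np.ones((2, 5))
h_train = dropout(h, p=0.5, rng=rng)
```

With `p=0.5`, each surviving unit is scaled to 2.0, so the activations average out to the original values in expectation, which is what makes the train-time and test-time forward passes consistent.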
Related Papers
- A Study on Dropout Techniques to Reduce Overfitting in Deep Neural Networks (2020), 30 citations
- Deterministic dropout for deep neural networks using composite random forest (2020), 26 citations
- Continuous Dropout Strategy for Deep Learning Network (2018), 2 citations
- Reduction of Overfitting in Diabetes Prediction Using Deep Learning Neural Network (2017), 1 citation
- Dynamic DNN Nodes Drop-out Rate Analysis (2021)