Twin neural network regression
Abstract
We introduce twin neural network regression (TNNR). This method predicts differences between the target values of two different data points rather than the targets themselves. The solution of a traditional regression problem is then obtained by averaging over an ensemble of all predicted differences between the targets of an unseen data point and all training data points. Whereas ensembles are normally costly to produce, TNNR intrinsically creates an ensemble of predictions twice the size of the training set while training only a single neural network. Since ensembles have been shown to be more accurate than single models, this property naturally transfers to TNNR. We show that TNNR competes with, or improves upon, other state-of-the-art methods across different data sets. Furthermore, TNNR is constrained by self-consistency conditions, and we find that the violation of these conditions provides a signal for the prediction uncertainty.
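The prediction rule described in the abstract can be sketched in a few lines. The sketch below is an illustration under assumptions, not the paper's implementation: the toy data, the use of scikit-learn's `MLPRegressor` on concatenated input pairs (standing in for a twin/siamese architecture), and all hyperparameters are choices made here for brevity. A network F is trained on all ordered pairs to predict y_i - y_j; a new point is then predicted by averaging F(x_new, x_j) + y_j and y_j - F(x_j, x_new) over all training points, which yields the ensemble of twice the training-set size mentioned above.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Toy 1-D regression problem (illustrative, not from the paper)
X = rng.uniform(-1, 1, size=(80, 1))
y = np.sin(3 * X[:, 0]) + 0.05 * rng.normal(size=80)

# All ordered training pairs (x_i, x_j) with target y_i - y_j
i, j = np.meshgrid(np.arange(len(X)), np.arange(len(X)), indexing="ij")
pairs = np.hstack([X[i.ravel()], X[j.ravel()]])
diffs = y[i.ravel()] - y[j.ravel()]

# A single network F(x1, x2) ~ y1 - y2; a plain MLP on the
# concatenated pair stands in for the twin architecture here.
F = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
F.fit(pairs, diffs)

def tnnr_predict(x_new):
    """Average the intrinsic ensemble over all training anchors:
    F(x_new, x_j) + y_j (forward) and y_j - F(x_j, x_new) (reverse),
    giving 2 * len(X) predictions from one trained network."""
    Xn = np.tile(x_new, (len(X), 1))
    fwd = F.predict(np.hstack([Xn, X])) + y   # x_new in first slot
    bwd = y - F.predict(np.hstack([X, Xn]))  # x_new in second slot
    return np.concatenate([fwd, bwd]).mean()

# Should land in the vicinity of sin(0.9)
print(tnnr_predict(np.array([[0.3]])))
```

The spread of the individual ensemble members (rather than just their mean) is one place where the self-consistency violations discussed in the abstract could be monitored as an uncertainty signal.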