Learning in a large committee machine: worst case and average case
Europhysics Letters (EPL), 1996, Vol. 35(7), pp. 553–558
Abstract
Learning of realizable rules is studied for tree committee machines with continuous weights. No nontrivial upper bound exists for the generalization error of consistent students as the number of hidden units K increases. However, numerical evidence shows that consistent students whose generalization error is significantly higher than the average-case prediction are extremely hard to find. An on-line learning algorithm is presented for which, in the limit of large K, the generalization error scales with the training set size as in the average-case theory.
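The architecture in question can be illustrated with a short sketch. In a tree committee machine, each of the K hidden units receives its own disjoint block of the input, and the machine's output is the majority vote of the hidden-unit signs; a "realizable rule" means the labels are generated by a teacher of the same architecture, and a "consistent student" is any weight set that reproduces the teacher's labels on the training examples. The names `tree_committee` and `teacher` below are illustrative, not from the paper:

```python
import numpy as np

def tree_committee(weights, x):
    """Output of a tree committee machine.

    weights: array of shape (K, n) -- one weight vector per hidden unit.
    x: input of length K*n, split into K disjoint receptive fields
       (the "tree" architecture: no input is shared between units).
    Returns the majority vote (sign of the sum) of the hidden-unit signs.
    """
    K, n = weights.shape
    blocks = x.reshape(K, n)                       # disjoint input blocks
    hidden = np.sign(np.einsum('ki,ki->k', weights, blocks))
    return np.sign(hidden.sum())

# Illustrative setup: a random teacher defines the realizable rule.
# K is odd so the majority vote cannot tie.
rng = np.random.default_rng(0)
K, n = 3, 5
teacher = rng.standard_normal((K, n))
x = rng.standard_normal(K * n)
label = tree_committee(teacher, x)                 # teacher's label, +1 or -1
```

A consistent student would be any other `(K, n)` weight array agreeing with `teacher` on every training example; the paper's worst-case question is how badly such a student can still generalize as K grows.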