Tri-training: exploiting unlabeled data using three classifiers
Top 1% of 2005 papers by citations
Abstract
In many practical data mining applications, such as Web page classification, unlabeled training examples are readily available, but labeled ones are fairly expensive to obtain. Therefore, semi-supervised learning algorithms such as co-training have attracted much attention. In this paper, a new co-training style semi-supervised learning algorithm, named tri-training, is proposed. This algorithm generates three classifiers from the original labeled example set. These classifiers are then refined using unlabeled examples in the tri-training process. In detail, in each round of tri-training, an unlabeled example is labeled for a classifier if the other two classifiers agree on the labeling, under certain conditions. Since tri-training neither requires the instance space to be described with sufficient and redundant views nor does it put any constraints on the supervised learning algorithm, its applicability is broader than that of previous co-training style algorithms. Experiments on UCI data sets and application to the Web page classification task indicate that tri-training can effectively exploit unlabeled data to enhance the learning performance.
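The core labeling rule described above (an unlabeled example is pseudo-labeled for one classifier when the other two agree on its label) can be sketched as follows. This is a minimal illustration only: the paper's additional conditions on estimated error rates and the re-training loop are omitted, and the toy threshold classifiers and data are assumptions for demonstration, not part of the original algorithm description.

```python
def tri_label(classifiers, unlabeled):
    """For each classifier i, collect the unlabeled examples on which the
    other two classifiers agree, paired with that agreed label."""
    pseudo = {i: [] for i in range(3)}
    for x in unlabeled:
        preds = [clf(x) for clf in classifiers]
        for i in range(3):
            j, k = [m for m in range(3) if m != i]
            if preds[j] == preds[k]:              # the other two agree
                pseudo[i].append((x, preds[j]))   # pseudo-label for classifier i
    return pseudo

# Toy stand-ins for three classifiers trained on bootstrap samples of the
# labeled set: simple 1-D threshold rules (illustrative assumption).
clfs = [lambda x: int(x > 0.4), lambda x: int(x > 0.5), lambda x: int(x > 0.6)]
unlabeled = [0.3, 0.45, 0.55, 0.7]
pseudo = tri_label(clfs, unlabeled)
```

In the full algorithm each classifier would then be re-trained on the original labeled set plus its newly pseudo-labeled examples, and the process repeated until no classifier changes.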
Related Papers
- Tri-training: exploiting unlabeled data using three classifiers (2005), 1,162 citations
- A random subspace method for co-training (2008), 62 citations
- Improved Tri-training with Unlabeled Data (2012), 20 citations
- Semi-supervised learning for natural language processing (2008), 9 citations
- Reinforced Co-Training (2018), 2 citations