DCPE co-training: Co-training based on diversity of class probability estimation

Document Type

Conference Proceeding

Date of Original Version

12-1-2010

Abstract

Co-training is a semi-supervised learning technique that uses two base learners to recover (label) unlabeled data. Standard co-training approaches augment the training data with the unlabeled examples recovered most confidently. In this paper, we investigate co-training approaches with a focus on the diversity issue and propose the diversity of class probability estimation (DCPE) co-training approach. The key idea of DCPE co-training is to use the DCPE between the two base learners to choose which recovered unlabeled data to add. The results are compared with classic co-training, tri-training, and self-training methods. Our experimental study on UCI benchmark data sets shows that DCPE co-training is robust and efficient for classification. © 2010 IEEE.
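
To make the idea concrete, below is a minimal illustrative sketch of a DCPE-style co-training loop, not the authors' implementation. It assumes a single shared feature view, two scikit-learn classifiers that expose predict_proba (GaussianNB and a decision tree are arbitrary choices), and that "diversity of class probability estimation" is measured as the absolute difference between the two learners' estimated class distributions; the function name, round counts, and selection rule are hypothetical.

# Sketch of DCPE-style co-training (illustrative only; assumptions noted above).
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

def dcpe_cotrain(X_lab, y_lab, X_unlab, rounds=10, per_round=10):
    """Co-training where unlabeled examples are chosen by the diversity
    (absolute difference) of the two learners' class probability estimates."""
    h1, h2 = GaussianNB(), DecisionTreeClassifier(max_depth=5)
    X1, y1 = X_lab.copy(), y_lab.copy()
    X2, y2 = X_lab.copy(), y_lab.copy()
    pool = X_unlab.copy()

    for _ in range(rounds):
        if len(pool) == 0:
            break
        h1.fit(X1, y1)
        h2.fit(X2, y2)
        p1, p2 = h1.predict_proba(pool), h2.predict_proba(pool)
        # Diversity of class probability estimation: how strongly the two
        # learners disagree on the estimated class distribution.
        diversity = np.abs(p1 - p2).sum(axis=1)
        picked = np.argsort(-diversity)[:per_round]
        # Each learner labels the selected examples for its peer's training set.
        X2 = np.vstack([X2, pool[picked]])
        y2 = np.concatenate([y2, h1.predict(pool[picked])])
        X1 = np.vstack([X1, pool[picked]])
        y1 = np.concatenate([y1, h2.predict(pool[picked])])
        pool = np.delete(pool, picked, axis=0)

    # Final fit on the augmented training sets.
    return h1.fit(X1, y1), h2.fit(X2, y2)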

Publication Title

Proceedings of the International Joint Conference on Neural Networks
