The unsymmetrical-style co-training

DOI:
Authors:
Type: Book Chapter
Proceedings title: Advances in Knowledge Discovery and Data Mining: 15th Pacific-Asia Conference, PAKDD 2011, Shenzhen, China, May 24-27, 2011, Proceedings, Part I
Series title: Lecture Notes in Computer Science, Volume 6634
Conference: 15th Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD 2011), May 24-27, 2011, Shenzhen, China
Pages: 100-111; # of pages: 12
Subject: Class labels; Co-training; Co-training algorithm; Conditional independence assumption; Early convergence; Framework structures; Labeled data; Learning capabilities; Self-training; Semi-supervised learning; Semi-supervised learning methods; Unlabeled data; Unsymmetrical structure; Algorithms; Data mining; Steel beams and girders; Supervised learning; Convergence of numerical methods
Abstract: Semi-supervised learning has attracted much attention over the past decade because it combines unlabeled data with labeled data to improve the learning capability of models. Co-training is a representative paradigm among semi-supervised learning methods. Some co-training-style algorithms, such as standard co-training and co-EM, learn two classifiers based on two views of the instance space, but they must satisfy the assumption that the two views are each sufficient and conditionally independent given the class labels. Other co-training-style algorithms, such as multiple-learner methods, use two different underlying classifiers based on a single view of the instance space; however, they cannot utilize the labeled data effectively and suffer from early convergence. After analyzing various co-training-style algorithms, we find that all of them have symmetrical framework structures, and that these symmetrical structures are tied to their constraints. In this paper, we propose a novel unsymmetrical-style method, which we call the unsymmetrical co-training algorithm. It combines the advantages of the other co-training-style algorithms while overcoming their disadvantages. Within our unsymmetrical structure, we apply two unsymmetrical classifiers, namely a self-training classifier and an EM classifier, and train them in an unsymmetrical way. The unsymmetrical co-training algorithm not only avoids the conditional independence assumption but also overcomes early convergence and the ineffective utilization of labeled data. We conduct experiments to compare the performance of these co-training-style algorithms; the results show that the unsymmetrical co-training algorithm outperforms the others.
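
The abstract describes the unsymmetrical structure only at a high level, so the sketch below is one plausible reading rather than the authors' exact procedure: it pairs a discriminative self-training learner (logistic regression) with a generative EM-style learner (Gaussian naive Bayes refit with a hard-EM step) on a single view. The function name, choice of base learners, confidence threshold, and update schedule are illustrative assumptions.

    # Minimal sketch of an unsymmetrical co-training-style loop (illustrative only).
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.naive_bayes import GaussianNB

    def unsymmetrical_cotrain_sketch(X_lab, y_lab, X_unlab, rounds=10, conf_threshold=0.9):
        # Two unsymmetrical learners sharing one view of the instance space.
        self_clf = LogisticRegression(max_iter=1000)   # self-training side
        em_clf = GaussianNB()                          # EM-style generative side

        X_l, y_l = np.asarray(X_lab), np.asarray(y_lab)
        X_u = np.asarray(X_unlab)

        for _ in range(rounds):
            if len(X_u) == 0:
                break

            # Self-training step: move only high-confidence unlabeled points
            # into the labeled pool with their predicted labels.
            self_clf.fit(X_l, y_l)
            proba = self_clf.predict_proba(X_u)
            confident = proba.max(axis=1) >= conf_threshold
            new_labels = self_clf.classes_[proba[confident].argmax(axis=1)]
            X_l = np.vstack([X_l, X_u[confident]])
            y_l = np.concatenate([y_l, new_labels])
            X_u = X_u[~confident]

            # EM-style step (hard-EM approximation): pseudo-label the remaining
            # unlabeled pool and refit the generative classifier on everything.
            em_clf.fit(X_l, y_l)
            if len(X_u) > 0:
                pseudo = em_clf.predict(X_u)
                em_clf.fit(np.vstack([X_l, X_u]), np.concatenate([y_l, pseudo]))

        return self_clf, em_clf

In the paper's actual method the two classifiers are trained in an unsymmetrical way so that labeled data is used more effectively and early convergence is avoided; the loop above only shows the general shape of such an exchange between a self-training learner and an EM learner.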
Publication date: 2011
Publisher: Springer Berlin Heidelberg
Affiliation: National Research Council Canada (NRC-CNRC)
Peer reviewed: Yes
NPARC number: 21271771
Record identifier: 5beb051c-503e-42cc-afe3-e51ff921c782
Record created: 2014-03-24
Record modified: 2016-07-19