Improve Decision Tree for Probability-Based Ranking by Lazy Learners

Author:
Type: Article
Conference: The 18th IEEE International Conference on Tools with Artificial Intelligence (ICTAI'06), November 13-15, 2006, Washington, DC
Abstract: Existing work shows that classic decision trees have inherent deficiencies in producing a good probability-based ranking (e.g., AUC). This paper aims to improve ranking performance under the decision-tree paradigm by presenting two new models. The intuition behind our work is that probability-based ranking is a relative metric among samples; distinct probabilities are therefore crucial for accurate ranking. The first model, Lazy Distance-based Tree (LDTree), uses a lazy learner at each leaf to explicitly distinguish the different contributions of leaf samples when estimating class probabilities for an unlabeled sample. The second model, Eager Distance-based Tree (EDTree), improves on LDTree by turning it into an eager algorithm. In both models, each unlabeled sample is assigned a set of unique class-membership probabilities instead of a set of uniform ones, which gives finer resolution for differentiating samples and thereby improves ranking. Experiments on 34 UCI sample sets verify that our models greatly outperform C4.5, C4.4 and other standard smoothing methods designed for better ranking.
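
The abstract's central contrast, uniform versus query-specific leaf probabilities, can be illustrated with a short sketch. The record does not give the paper's actual weighting scheme, so the Euclidean metric, inverse-distance weights, eps smoothing term, and function names below are illustrative assumptions, not the published LDTree/EDTree formulation.

```python
import numpy as np

def leaf_probs_uniform(leaf_y, n_classes):
    # Classic decision-tree estimate: every query reaching this leaf
    # gets the same class frequencies, so leaf-mates cannot be ranked
    # against each other.
    counts = np.bincount(leaf_y, minlength=n_classes).astype(float)
    return counts / counts.sum()

def leaf_probs_distance_weighted(x, leaf_X, leaf_y, n_classes, eps=1e-9):
    # LDTree-style idea (assumed weighting): weight each leaf sample by
    # its closeness to the query, so two queries falling into the same
    # leaf receive distinct probability estimates.
    dists = np.linalg.norm(leaf_X - x, axis=1)   # Euclidean distances
    weights = 1.0 / (dists + eps)                # nearer samples count more
    probs = np.zeros(n_classes)
    np.add.at(probs, leaf_y, weights)            # sum weights per class
    return probs / probs.sum()

# Toy leaf with three training samples from two classes.
leaf_X = np.array([[0.1, 0.2], [0.9, 0.8], [0.2, 0.1]])
leaf_y = np.array([0, 1, 0])
x = np.array([0.15, 0.15])
print(leaf_probs_uniform(leaf_y, 2))                      # same for any query
print(leaf_probs_distance_weighted(x, leaf_X, leaf_y, 2)) # query-specific
```

In this sketch the uniform estimate assigns identical probabilities to every sample routed to the leaf, while the distance-weighted variant produces finer-grained, per-query estimates, which is the resolution the abstract argues is needed for accurate ranking.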
Publication date:
Language: English
Affiliation: NRC Institute for Information Technology; National Research Council Canada
Peer reviewed: No
NRC number: 48784
NPARC number: 5765136
Record identifier: 7756da3d-b54f-471f-b744-b9710ffe79d7
Record created: 2009-03-29
Record modified: 2016-05-09