Control of sparseness for feature selection

DOI:
Authors:
Type: Book Chapter
Proceedings title: Structural, Syntactic, and Statistical Pattern Recognition: Joint IAPR International Workshops, SSPR 2004 and SPR 2004, Lisbon, Portugal, August 18-20, 2004. Proceedings
Series title: Lecture Notes in Computer Science, Volume 3138
Conference: Joint IAPR International Workshops on Structural and Syntactic Pattern Recognition (SSPR 2004) and Statistical Pattern Recognition (SPR 2004), August 18-20, 2004, Lisbon, Portugal
Pages: 707-715; # of pages: 9
Abstract: In linear discriminant (LD) analysis, a high sample-size-to-feature ratio is desirable. The linear programming (LP) procedure for LD identification handles the curse of dimensionality through simultaneous minimization of the L1 norm of the classification errors and of the LD weights. The sparseness of the solution, i.e. the fraction of features retained, can be controlled by a parameter in the objective function. By qualitatively analyzing the objective function and the constraints of the problem, we show why sparseness arises. In a sparse solution, large values of the LD weight vector reveal the individual features most important for the decision boundary.
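
The abstract describes an L1-penalized linear program whose objective trades classification errors against the L1 norm of the discriminant weights. The sketch below illustrates one such formulation using scipy.optimize.linprog; the function name, the unit margin, the slack variables, and the parameter lam (the sparseness control) are illustrative assumptions and need not match the paper's exact formulation.

import numpy as np
from scipy.optimize import linprog

def sparse_ld_lp(X, y, lam=1.0):
    # X: (n, p) feature matrix; y: labels in {+1, -1}; lam: assumed sparseness parameter.
    n, p = X.shape
    # Nonnegative variables: u (p), v (p), b_pos, b_neg, slacks e (n),
    # with w = u - v and b = b_pos - b_neg linearizing the L1 terms.
    # Objective: lam * ||w||_1 + ||e||_1.
    c = np.concatenate([lam * np.ones(2 * p), [0.0, 0.0], np.ones(n)])
    # Margin constraints y_i * (w . x_i + b) >= 1 - e_i, written as A_ub z <= b_ub.
    Yx = y[:, None] * X
    A_ub = np.hstack([-Yx, Yx, -y[:, None], y[:, None], -np.eye(n)])
    b_ub = -np.ones(n)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (2 * p + 2 + n), method="highs")
    w = res.x[:p] - res.x[p:2 * p]
    b = res.x[2 * p] - res.x[2 * p + 1]
    return w, b  # features whose weight is (near) zero are effectively dropped

In this sketch, larger values of lam push more weights exactly to zero, so the fraction of retained features shrinks; the surviving large-magnitude entries of w point to the features most important for the decision boundary, in line with the abstract.
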
Publication date: 2004
Publisher: Springer Berlin Heidelberg
Affiliation: NRC Institute for Biodiagnostics; National Research Council Canada
Peer reviewed: Yes
NRC number: 2139
NPARC number: 9147559
Record identifier: cbdae675-bf2f-48f7-8a3a-9f4c94013e2d
Record created: 2009-06-25
Record modified: 2016-06-21