Fully automatic expression-invariant face correspondence

DOI: http://doi.org/10.1007/s00138-013-0579-9
Journal title: Machine Vision and Applications
Pages: 121; # of pages: 21
Abstract: We consider the problem of computing accurate point-to-point correspondences among a set of human face scans with varying expressions. Our fully automatic approach does not require any manually placed markers on the scan. Instead, the approach learns the locations of a set of landmarks present in a database and uses this knowledge to automatically predict the locations of these landmarks on a newly available scan. The predicted landmarks are then used to compute point-to-point correspondences between a template model and the newly available scan. To accurately fit the expression of the template to the expression of the scan, we use as template a blendshape model. Our algorithm was tested on a database of human faces of different ethnic groups with strongly varying expressions. Experimental results show that the obtained point-to-point correspondence is both highly accurate and consistent for most of the tested 3D face models. © 2013 Springer-Verlag Berlin Heidelberg.
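The pipeline the abstract describes (learn landmark locations from a database, predict them on a new scan, then establish template-to-scan correspondences) can be sketched as follows. This is only an illustrative simplification, not the paper's actual method: all function names are hypothetical, the learned predictor is reduced to a database mean, and the correspondence step is plain nearest-neighbor matching rather than blendshape fitting.

```python
import numpy as np

def learn_landmark_model(training_landmarks):
    # Stand-in for the learned landmark predictor: average each
    # landmark's position over the training database (list of (L, 3) arrays).
    return np.mean(np.stack(training_landmarks), axis=0)

def predict_landmarks(scan_points, landmark_model):
    # Predict landmarks on a new scan by snapping each model landmark
    # to its nearest point on the scan (scan_points: (N, 3)).
    diffs = scan_points[None, :, :] - landmark_model[:, None, :]
    sq_dists = np.einsum('ijk,ijk->ij', diffs, diffs)  # (L, N)
    return scan_points[np.argmin(sq_dists, axis=1)]

def correspond(template_points, scan_points):
    # Point-to-point correspondence: for each template vertex, the index
    # of the closest scan point (the paper instead fits a blendshape model).
    d = np.linalg.norm(template_points[:, None, :] - scan_points[None, :, :],
                       axis=2)
    return np.argmin(d, axis=1)
```

In the actual approach the predicted landmarks would guide a non-rigid fit of the blendshape template before dense correspondences are extracted; this sketch only shows the data flow between the three stages.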
Publication date
Affiliation: National Research Council Canada (NRC-CNRC)
Peer reviewed: Yes
NPARC number: 21270717
Record identifier: c44a3083-a9b7-4bf4-b60a-4081e5b4439b
Record created: 2014-02-17
Record modified: 2016-05-09