Ecological validity and the evaluation of speech summarization quality

Proceedings title: Proceedings of the Workshop on Evaluation Metrics and System Comparison for Automatic Summarization
Conference: NAACL-HLT 2012 Workshop on Evaluation Metrics and System Comparison for Automatic Summarization, Montreal, Quebec, Canada, 3–8 June 2012
Pages: 28–35; # of pages: 8
Abstract: There is little evidence of widespread adoption of speech summarization systems. This may be due in part to the fact that the natural language heuristics used to generate summaries are often optimized with respect to a class of evaluation measures that, while computationally and experimentally inexpensive, rely on subjectively selected gold standards against which automatically generated summaries are scored. This evaluation protocol does not take into account the usefulness of a summary in assisting the listener in achieving his or her goal. In this paper we study how current measures and methods for evaluating summarization systems compare to human-centric evaluation criteria. To this end, we designed and conducted an ecologically valid evaluation that determines the value of a summary when embedded in a task, rather than how closely a summary resembles a gold standard. The results of our evaluation demonstrate that, in the domain of lecture summarization, the well-known baseline of maximal marginal relevance (Carbonell and Goldstein, 1998) is statistically significantly worse than human-generated extractive summaries, and even worse than having no summary at all, in a simple quiz-taking task. Priming appears to have no statistically significant effect on the usefulness of the human summaries. In addition, ROUGE scores and, in particular, the context-free annotations that are often supplied to ROUGE as references may not always be reliable as inexpensive proxies for ecologically valid evaluations. In fact, under some conditions, relying exclusively on ROUGE can assign very favourable scores to human-generated summaries that are inconsistent in their usefulness relative to having no summary at all.
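For context, the maximal marginal relevance baseline named in the abstract greedily selects sentences that are relevant to a query while penalizing redundancy with sentences already chosen. The following is a minimal sketch of that selection rule only; the cosine similarity, the lam=0.5 trade-off, and the dense sentence vectors are illustrative assumptions, not the configuration evaluated in the paper.

```python
# Minimal sketch of Maximal Marginal Relevance (MMR) extractive selection
# (Carbonell and Goldstein, 1998). The vector representation, similarity
# function, and lam value are illustrative assumptions, not the setup
# used in the paper.
import numpy as np

def cosine(a, b):
    """Cosine similarity between two dense vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def mmr_select(sentence_vecs, query_vec, k, lam=0.5):
    """Greedily pick k sentence indices, trading off relevance to the
    query against redundancy with sentences already selected."""
    selected = []
    remaining = list(range(len(sentence_vecs)))
    while remaining and len(selected) < k:
        def score(i):
            relevance = cosine(sentence_vecs[i], query_vec)
            redundancy = max((cosine(sentence_vecs[i], sentence_vecs[j])
                              for j in selected), default=0.0)
            return lam * relevance - (1 - lam) * redundancy
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected
```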
Publication date: 2012
Affiliation: National Research Council Canada; Information and Communication Technologies
Peer reviewed: Yes
NPARC number: 20262886
Record identifier: 55ccecc7-a877-4f4e-a5bc-da5e66b948c2
Record created: 2012-07-10
Record modified: 2016-05-09