Model Selection in Summary Evaluation
Author(s)
Perez-Breva, Luis; Yoshimi, Osamu
Abstract
A central difficulty in the design of automated text summarization algorithms lies in objective evaluation. Viewing summarization as a tradeoff between length and information content, we introduce a technique based on a hierarchy of classifiers to rank different summarization methods through model selection. This evaluation technique allows for a broader comparison of summarization methods than traditional techniques of summary evaluation. We present an empirical study of two simple, albeit widely used, summarization methods that shows the different uses of this automated task-based evaluation system and confirms the results obtained with human-based evaluation methods over smaller corpora.
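The abstract does not spell out the method, but the task-based idea it describes — judge a summarizer by how well a downstream classifier performs when it sees only the summaries — can be sketched minimally. Everything below (the toy labeled corpus, the two baseline summarizers, the word-count classifier, and the leave-one-out loop) is an illustrative assumption for exposition, not the paper's actual hierarchy of classifiers or model-selection procedure.

```python
import random
from collections import Counter, defaultdict

# Hypothetical mini-corpus of (document, topic) pairs, standing in for a
# real labeled corpus used in task-based evaluation.
CORPUS = [
    ("the striker scored a late goal and the crowd cheered the home team", "sports"),
    ("the goalkeeper saved a penalty as the match ended in a draw", "sports"),
    ("the coach praised the team after the players won the cup final", "sports"),
    ("shares rallied on the exchange as investors bought stock in banks", "finance"),
    ("the central bank raised interest rates and bond prices fell sharply", "finance"),
    ("quarterly profits rose and the market rewarded the company stock", "finance"),
]

def lead_summary(text, n=5):
    """Baseline summarizer: keep the first n words of the document."""
    return " ".join(text.split()[:n])

def random_summary(text, n=5, rng=random.Random(0)):
    """Baseline summarizer: keep n words drawn at random (seeded for determinism)."""
    words = text.split()
    return " ".join(rng.sample(words, min(n, len(words))))

def train(pairs):
    """Naive bag-of-words model: count word occurrences per topic label."""
    counts = defaultdict(Counter)
    for text, label in pairs:
        counts[label].update(text.split())
    return counts

def classify(model, text):
    """Score each label by summed training counts of the text's words."""
    scores = {label: sum(c[w] for w in text.split()) for label, c in model.items()}
    return max(scores, key=scores.get)

def task_accuracy(summarizer, corpus):
    """Leave-one-out: train the classifier on summaries of all other
    documents, then classify the held-out document's summary.  A summarizer
    whose summaries preserve more class-discriminative content scores higher."""
    hits = 0
    for i, (text, label) in enumerate(corpus):
        train_pairs = [(summarizer(t), l) for j, (t, l) in enumerate(corpus) if j != i]
        model = train(train_pairs)
        if classify(model, summarizer(text)) == label:
            hits += 1
    return hits / len(corpus)

if __name__ == "__main__":
    # Compare two summarizers by the tradeoff the abstract describes:
    # same summary length, different retained information content.
    print("lead  :", task_accuracy(lead_summary, CORPUS))
    print("random:", task_accuracy(random_summary, CORPUS))
```

In this sketch, the classifier's accuracy on summaries is the evaluation signal: at a fixed summary length, the summarization method whose output supports better classification is ranked higher, which mirrors the length-versus-information-content framing of the abstract.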
Date issued
2002-12-01
Other identifiers
AIM-2002-023
CBCL-222
Series/Report no.
AIM-2002-023
CBCL-222
Keywords
AI