Supervised and Unsupervised Learning for Sentence Compression

J. Turner, Eugene Charniak

Published 2005 in Annual Meeting of the Association for Computational Linguistics

ABSTRACT

In Statistics-Based Summarization - Step One: Sentence Compression, Knight and Marcu (Knight and Marcu, 2000) (K&M) present a noisy-channel model for sentence compression. The main limitation of their model is the lack of data: they use a corpus of only 1035 training sentences. More data is not easily available, so in addition to improving the original K&M noisy-channel model, we create unsupervised and semi-supervised models of the task. Finally, we point out problems with modeling the task in this way. These suggest areas for future research.
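For context, the noisy-channel formulation the abstract refers to comes from Knight and Marcu (2000); the sketch below states that general framework and is not taken from this page. The compressed sentence is treated as the source and the observed long sentence as its noisy expansion:

```latex
% Noisy-channel sentence compression (Knight & Marcu, 2000):
% given a long sentence l, seek the short sentence s maximizing
% the posterior, which Bayes' rule factors into two models:
%   P(s \mid l) \propto P(s)\,P(l \mid s)
\hat{s} = \operatorname*{arg\,max}_{s} \; P(s)\,P(l \mid s)
```

Here P(s) is a source (language) model scoring the well-formedness of the short sentence, and P(l | s) is a channel model of how short sentences expand into long ones; the supervised version estimates the channel model from aligned long/short sentence pairs, which is why the 1035-sentence corpus is the bottleneck the paper addresses.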

PUBLICATION RECORD

  • Publication year

    2005

  • Venue

    Annual Meeting of the Association for Computational Linguistics

  • Publication date

    2005-06-25

  • Fields of study

    Computer Science


  • Source metadata

    Semantic Scholar

CITED BY

175 citing papers