Early results for Named Entity Recognition with Conditional Random Fields, Feature Induction and Web-Enhanced Lexicons

A. McCallum, Wei Li

Published 2003 in Conference on Computational Natural Language Learning

ABSTRACT

Models for many natural language tasks benefit from the flexibility to use overlapping, non-independent features. For example, the need for labeled data can be drastically reduced by taking advantage of domain knowledge in the form of word lists, part-of-speech tags, character n-grams, and capitalization patterns. While it is difficult to capture such inter-dependent features with a generative probabilistic model, conditionally-trained models, such as conditional maximum entropy models, handle them well. There has been significant work with such models for greedy sequence modeling in NLP (Ratnaparkhi, 1996; Borthwick et al., 1998).
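The overlapping, non-independent features the abstract mentions (word identity, capitalization patterns, character n-grams, lexicon membership, neighboring words) can be sketched as a simple feature-extraction function. This is an illustrative sketch, not the paper's actual feature set; the function and feature names are assumptions:

```python
# Hypothetical sketch of overlapping token features of the kind a
# conditionally-trained sequence model (e.g. a CRF) can use directly.
# Feature names and structure are illustrative, not from the paper.

def token_features(tokens, i, lexicon):
    """Return a dict of binary/real-valued features for tokens[i]."""
    word = tokens[i]
    feats = {
        f"word={word.lower()}": 1.0,                  # word identity
        "is_capitalized": float(word[:1].isupper()),  # capitalization pattern
        "all_caps": float(word.isupper()),
        "in_lexicon": float(word.lower() in lexicon), # word-list membership
    }
    # Character trigrams overlap heavily with the word-identity feature;
    # a generative model would struggle with this dependence.
    for j in range(len(word) - 2):
        feats[f"tri={word[j:j+3].lower()}"] = 1.0
    # Context features overlap across adjacent positions.
    if i > 0:
        feats[f"prev={tokens[i-1].lower()}"] = 1.0
    if i < len(tokens) - 1:
        feats[f"next={tokens[i+1].lower()}"] = 1.0
    return feats
```

Because the model is conditioned on the observation sequence, nothing requires these features to be independent of one another, which is precisely the flexibility the abstract argues for.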

PUBLICATION RECORD

  • Publication year

    2003

  • Venue

    Conference on Computational Natural Language Learning

  • Publication date

    2003-05-31

  • Fields of study

    Linguistics, Computer Science


  • Source metadata

    Semantic Scholar


CITED BY

1,370 citing papers indexed on Semantic Scholar.