Revisiting Embedding Features for Simple Semi-supervised Learning

Jiang Guo, Wanxiang Che, Haifeng Wang, Ting Liu

Published in 2014 at the Conference on Empirical Methods in Natural Language Processing (EMNLP)

ABSTRACT

Recent work has shown success in using continuous word embeddings learned from unlabeled data as features to improve supervised NLP systems, which is regarded as a simple semi-supervised learning mechanism. However, fundamental problems remain in effectively incorporating word embedding features within the framework of linear models. In this study, we investigate and analyze three different approaches for utilizing the embedding features, including a newly proposed distributional prototype approach. The presented approaches can be integrated into most classical linear models in NLP. Experiments on the task of named entity recognition show that each of the proposed approaches makes better use of the word embedding features, among which the distributional prototype approach performs best. Moreover, combining the approaches yields additive improvements, outperforming the dense, continuous embedding features by nearly 2 points of F1 score.
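One way to turn continuous embeddings into features a linear model can use, discussed in work of this kind, is to discretize each embedding dimension into indicator features. The sketch below is a minimal illustration of that binarization idea under assumed thresholds (a dimension fires an indicator when its value is at least the mean of the positive values, or at most the mean of the negative values, in that dimension); the function name `binarize_embeddings` and the exact thresholding are assumptions, not the authors' precise recipe.

```python
import numpy as np

def binarize_embeddings(emb, words):
    """Map each word's continuous vector to discrete indicator features.

    For every dimension d, a word gets feature "d{d}=U" if its value is
    unusually high (>= mean of the positive values in that dimension) or
    "d{d}=B" if unusually low (<= mean of the negative values); middling
    values contribute no feature. Hypothetical thresholds for illustration.
    """
    feats = {w: [] for w in words}
    for d in range(emb.shape[1]):
        col = emb[:, d]
        pos = col[col > 0]
        neg = col[col < 0]
        upper = pos.mean() if pos.size else np.inf
        lower = neg.mean() if neg.size else -np.inf
        for i, w in enumerate(words):
            if col[i] >= upper:
                feats[w].append(f"d{d}=U")
            elif col[i] <= lower:
                feats[w].append(f"d{d}=B")
    return feats

# Toy 3-word, 2-dimensional embedding table (made-up values).
emb = np.array([[0.9, -0.1],
                [0.2, -0.8],
                [-0.5, 0.6]])
feats = binarize_embeddings(emb, ["cat", "dog", "run"])
# feats["run"] → ["d0=B", "d1=U"]
```

The resulting string features can be fed to a linear model (e.g. a CRF or perceptron tagger) exactly like word-cluster or gazetteer features, which is what makes such discretization attractive for classical feature-based systems.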

PUBLICATION RECORD

  • Publication year

    2014

  • Venue

    Conference on Empirical Methods in Natural Language Processing

  • Publication date

    2014-10-01

  • Fields of study

    Computer Science

  • Source metadata

    Semantic Scholar

REFERENCES

30 references.

CITED BY

Cited by 126 papers.