Don’t count, predict! A systematic comparison of context-counting vs. context-predicting semantic vectors

Marco Baroni, Georgiana Dinu, Germán Kruszewski

Published 2014 in Annual Meeting of the Association for Computational Linguistics

ABSTRACT

Context-predicting models (more commonly known as embeddings or neural language models) are the new kids on the distributional semantics block. Despite the buzz surrounding these models, the literature is still lacking a systematic comparison of the predictive models with classic, count-vector-based distributional semantic approaches. In this paper, we perform such an extensive evaluation, on a wide range of lexical semantics tasks and across many parameter settings. The results, to our own surprise, show that the buzz is fully justified, as the context-predicting models obtain a thorough and resounding victory against their count-based counterparts.
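
To make the contrast concrete, below is a minimal Python sketch (not the authors' code) of the two model families the paper compares: a count-based co-occurrence vector built over a symmetric context window, and a word2vec-style skip-gram embedding trained to predict context words. It assumes gensim and numpy are available; the toy corpus, window size, and hyperparameters are illustrative only.

    # A toy sketch, not the authors' code: gensim/numpy assumed available,
    # and the corpus, window size, and hyperparameters are illustrative.
    import numpy as np
    from gensim.models import Word2Vec

    corpus = [
        "the cat sat on the mat".split(),
        "the dog sat on the log".split(),
        "the cat chased the dog".split(),
    ]

    # Count-based ("context-counting"): a word-by-word co-occurrence
    # matrix accumulated over a symmetric context window.
    vocab = sorted({w for sent in corpus for w in sent})
    idx = {w: i for i, w in enumerate(vocab)}
    window = 2
    counts = np.zeros((len(vocab), len(vocab)))
    for sent in corpus:
        for i, w in enumerate(sent):
            lo, hi = max(0, i - window), min(len(sent), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    counts[idx[w], idx[sent[j]]] += 1.0

    # Prediction-based ("context-predicting"): skip-gram embeddings
    # trained to predict context words (word2vec-style).
    w2v = Word2Vec(corpus, vector_size=20, window=window, min_count=1,
                   sg=1, epochs=50, seed=0)

    def cosine(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

    # Both models are queried the same way: cosine similarity between
    # the vectors they assign to a word pair.
    print("count-based   cat~dog:", cosine(counts[idx["cat"]], counts[idx["dog"]]))
    print("predict-based cat~dog:", cosine(w2v.wv["cat"], w2v.wv["dog"]))

On a realistic corpus the raw counts would not be used directly: the paper's count-model configurations reweight them (positive PMI or local mutual information) and optionally compress them with SVD or NMF before the comparison.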

PUBLICATION RECORD

  • Publication year

    2014

  • Venue

    Annual Meeting of the Association for Computational Linguistics

  • Publication date

    2014-06-01

  • Fields of study

    Linguistics, Computer Science

  • Source metadata

    Semantic Scholar

REFERENCES

48 references

CITED BY

1,507 citing papers