Do Supervised Distributional Methods Really Learn Lexical Inference Relations?

Omer Levy, Steffen Remus, Chris Biemann, Ido Dagan

Published in 2015 in the Proceedings of the North American Chapter of the Association for Computational Linguistics (NAACL)

ABSTRACT

Distributional representations of words have been recently used in supervised settings for recognizing lexical inference relations between word pairs, such as hypernymy and entailment. We investigate a collection of these state-of-the-art methods, and show that they do not actually learn a relation between two words. Instead, they learn an independent property of a single word in the pair: whether that word is a “prototypical hypernym”.
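The core observation can be made concrete with a short sketch (illustrative only, not the authors' code; all names and dimensions are assumptions). A linear classifier over the common concatenation representation concat(v_x, v_y) computes a score w1·v_x + w2·v_y, a sum of two terms that each depend on only one word. The score therefore cannot model an interaction between x and y, which is why such a classifier can at best learn an independent property of one word, such as whether y looks like a "prototypical hypernym". The decomposition implies an exact exchange identity over any four words:

```python
# Illustrative sketch: a linear score over concatenated word vectors
# decomposes into independent per-word terms, so for any words x1, x2, y1, y2:
#   score(x1,y1) + score(x2,y2) == score(x1,y2) + score(x2,y1)
# (hypothetical random vectors and weights, not trained embeddings)
import random

random.seed(0)
d = 50  # assumed embedding dimensionality

def rand_vec(dim=d):
    """A stand-in for a word embedding."""
    return [random.gauss(0.0, 1.0) for _ in range(dim)]

w = [random.gauss(0.0, 1.0) for _ in range(2 * d)]  # linear classifier weights

def score(vx, vy):
    """Linear score over the concatenation concat(vx, vy)."""
    features = vx + vy
    return sum(wi * fi for wi, fi in zip(w, features))

x1, x2, y1, y2 = rand_vec(), rand_vec(), rand_vec(), rand_vec()
lhs = score(x1, y1) + score(x2, y2)
rhs = score(x1, y2) + score(x2, y1)
print(abs(lhs - rhs) < 1e-9)
```

Because swapping the left-hand words across pairs never changes the total score, no linear weighting of concatenated vectors can distinguish a genuine pair relation from a property of the individual words.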

PUBLICATION RECORD

  • Publication year

    2015

  • Venue

    North American Chapter of the Association for Computational Linguistics

  • Fields of study

    Linguistics, Computer Science

  • Source metadata

    Semantic Scholar

REFERENCES

28 references.

CITED BY

243 citing papers.