How Well Do Distributional Models Capture Different Types of Semantic Knowledge?

Dana Rubinstein, Effi Levi, Roy Schwartz, A. Rappoport

Published in 2015 at the Annual Meeting of the Association for Computational Linguistics

ABSTRACT

In recent years, distributional models (DMs) have shown great success in representing lexical semantics. In this work we show that the extent to which DMs represent semantic knowledge is highly dependent on the type of knowledge. We pose the task of predicting properties of concrete nouns in a supervised setting, and compare learning taxonomic properties (e.g., animacy) with learning attributive properties (e.g., size, color). We employ four state-of-the-art DMs as sources of feature representation for this task, and show that they all yield poor results when tested on attributive properties, achieving no more than an average F-score of 0.37 on the binary property prediction task, compared to 0.73 on taxonomic properties. Our results suggest that the distributional hypothesis may not be equally applicable to all types of semantic information.
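The experimental setup described in the abstract — using DM word vectors as features for a supervised binary property classifier, evaluated by F-score — can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the toy vectors, the planted "animacy" signal, the word list, and the plain gradient-descent logistic regression are all hypothetical stand-ins for real DM embeddings and the classifiers the authors used.

```python
import numpy as np

# Hypothetical toy setup: in the paper, feature vectors come from trained
# distributional models; here we use small random vectors plus a planted
# signal dimension so the sketch is self-contained and deterministic.
rng = np.random.default_rng(0)

nouns = ["dog", "cat", "horse", "bird", "rock", "chair", "table", "lamp"]
animate = {"dog", "cat", "horse", "bird"}  # binary property: animacy
dim = 10

vectors = {w: rng.normal(size=dim) for w in nouns}
for w in animate:
    vectors[w][0] += 2.0  # planted signal standing in for real DM structure

X = np.array([vectors[w] for w in nouns])
y = np.array([1.0 if w in animate else 0.0 for w in nouns])

# Minimal logistic regression trained by gradient descent (a stand-in for
# the supervised classifiers trained over DM features in the paper).
w_vec, b, lr = np.zeros(dim), 0.0, 0.5
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w_vec + b)))
    w_vec -= lr * (X.T @ (p - y)) / len(y)
    b -= lr * float(np.mean(p - y))

pred = (1.0 / (1.0 + np.exp(-(X @ w_vec + b))) > 0.5).astype(float)

# Binary F-score, the evaluation measure reported in the paper.
tp = float(np.sum((pred == 1) & (y == 1)))
fp = float(np.sum((pred == 1) & (y == 0)))
fn = float(np.sum((pred == 0) & (y == 1)))
precision = tp / (tp + fp) if tp + fp else 0.0
recall = tp / (tp + fn) if tp + fn else 0.0
f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
print(f"F-score on toy animacy task: {f1:.2f}")
```

In this sketch the signal is planted, so the classifier recovers it; the paper's finding is that real DM vectors carry a recoverable signal for taxonomic properties like animacy but not for attributive ones like size or color, where the same pipeline yields low F-scores.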

PUBLICATION RECORD

  • Publication year

    2015

  • Venue

    Annual Meeting of the Association for Computational Linguistics

  • Publication date

    2015-07-01

  • Fields of study

    Linguistics, Computer Science


  • Source metadata

    Semantic Scholar


REFERENCES

21 references listed.

CITED BY

81 citing papers listed.