Sparse Overcomplete Word Vector Representations

Manaal Faruqui, Yulia Tsvetkov, Dani Yogatama, Chris Dyer, Noah A. Smith

Published in Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL), 2015

ABSTRACT

Current distributed representations of words show little resemblance to theories of lexical semantics. The former are dense and uninterpretable, the latter largely based on familiar, discrete classes (e.g., supersenses) and relations (e.g., synonymy and hypernymy). We propose methods that transform word vectors into sparse (and optionally binary) vectors. The resulting representations are more similar to the interpretable features typically used in NLP, though they are discovered automatically from raw corpora. Because the vectors are highly sparse, they are computationally easy to work with. Most importantly, we find that they outperform the original vectors on benchmark tasks.
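The transformation the abstract describes is, at its core, sparse dictionary learning: dense word vectors are re-expressed as sparse codes over an overcomplete set of basis vectors, and the codes can optionally be thresholded into binary features. Below is a minimal sketch of that general recipe using scikit-learn's DictionaryLearning in place of the paper's own optimizer; the random input matrix, the dimensions, the penalty weight, and the final thresholding rule are illustrative assumptions, not the paper's exact formulation.

    import numpy as np
    from sklearn.decomposition import DictionaryLearning

    # Toy stand-in for pretrained embeddings: 1000 "words", 50 dimensions.
    # In the paper's setting, X would hold vectors trained on raw corpora
    # (e.g., GloVe or word2vec).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 50))

    # Overcomplete dictionary: more atoms (200) than input dimensions (50).
    # alpha is the L1 penalty that drives the codes toward sparsity.
    learner = DictionaryLearning(
        n_components=200,
        alpha=1.0,
        max_iter=50,
        transform_algorithm="lasso_lars",
        random_state=0,
    )
    A = learner.fit_transform(X)  # sparse codes, shape (1000, 200)

    print("fraction of nonzero entries:", np.mean(A != 0))

    # Optional binarization, a simplified analogue of the paper's binary
    # variant: keep a 1 wherever a code is positive, 0 elsewhere.
    B = (A > 0).astype(np.uint8)

The overcompleteness comes from choosing more dictionary atoms than input dimensions, so each word activates only a small subset of atoms; it is these sparse (or binarized) codes, rather than the original dense vectors, that would be fed to the benchmark tasks.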
