Not All Contexts Are Created Equal: Better Word Representations with Variable Attention

Wang Ling, Yulia Tsvetkov, Silvio Amir, Ramón Fernández Astudillo, Chris Dyer, Alan W. Black, Isabel Trancoso, Chu-Cheng Lin

Published in 2015 at the Conference on Empirical Methods in Natural Language Processing (EMNLP)

ABSTRACT

We introduce an extension to the bag-of-words model for learning word representations that take into account both syntactic and semantic properties within language. This is done by employing an attention model that finds, within the contextual words, the words that are relevant for each prediction. The general intuition of our model is that some words are only relevant for predicting local context (e.g. function words), while other words are more suited for determining global context, such as the topic of the document. Experiments performed on both semantically and syntactically oriented tasks show gains using our model over the existing bag-of-words model. Furthermore, compared to other more sophisticated models, our model scales better as we increase the size of the context.
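
To make the abstract's idea concrete, below is a minimal sketch of attention-weighted context averaging: instead of CBOW's uniform average over the window, each context word receives a softmax weight before the embeddings are combined. All names, shapes, and the particular attention form used here (a per-word score plus a per-relative-position score) are illustrative assumptions, not the authors' exact parameterization.

    import numpy as np

    # Sketch of one prediction step in an attention-weighted bag-of-words
    # model (illustrative; parameter names and the attention form are
    # assumptions, not the paper's exact formulation).

    rng = np.random.default_rng(0)
    V, d, win = 10_000, 100, 5            # vocab size, embedding dim, half-window

    W_in = rng.normal(0.0, 0.1, (V, d))   # context (input) embeddings
    W_out = rng.normal(0.0, 0.1, (V, d))  # target (output) embeddings
    a_word = np.zeros(V)                  # per-word attention score (assumed form)
    a_pos = np.zeros(2 * win)             # per-relative-position score (assumed form)

    def context_vector(ctx_ids, ctx_pos):
        # Softmax attention over the window, then a weighted sum of the
        # context embeddings (replaces CBOW's uniform average).
        scores = a_word[ctx_ids] + a_pos[ctx_pos]
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()
        return weights @ W_in[ctx_ids]    # shape (d,)

    def target_logits(ctx_ids, ctx_pos):
        # Score every vocabulary word against the attended context vector.
        return W_out @ context_vector(ctx_ids, ctx_pos)  # shape (V,)

    # Example: a 4-word context around a missing middle word.
    ctx_ids = np.array([12, 845, 3, 77])
    ctx_pos = np.array([0, 1, 3, 4])      # relative window slots; slot 2 is the target
    print(target_logits(ctx_ids, ctx_pos).shape)  # (10000,)

In training, the attention parameters would be learned jointly with the embeddings, so that uninformative context words (e.g. function words) can be down-weighted when they do not help the prediction, which is the intuition the abstract describes.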

PUBLICATION RECORD

  • Publication year: 2015
  • Venue: Conference on Empirical Methods in Natural Language Processing
  • Publication date: 2015-09-01
  • Fields of study: Computer Science
  • Source metadata: Semantic Scholar

REFERENCES

  • 23 references.

CITED BY

  • 141 citing papers.