An efficient framework for learning sentence representations

Lajanugen Logeswaran, Honglak Lee

Published 2018 in International Conference on Learning Representations

ABSTRACT

In this work we propose a simple and efficient framework for learning sentence representations from unlabelled data. Drawing inspiration from the distributional hypothesis and recent work on learning sentence representations, we reformulate the problem of predicting the context in which a sentence appears as a classification problem. Given a sentence and its context, a classifier distinguishes context sentences from other contrastive sentences based on their vector representations. This allows us to efficiently learn different types of encoding functions, and we show that the model learns high-quality sentence representations. We demonstrate that our sentence representations outperform state-of-the-art unsupervised and supervised representation learning methods on several downstream NLP tasks that involve understanding sentence semantics while achieving an order of magnitude speedup in training time.
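The objective sketched in the abstract — score candidate sentences against a given sentence and classify which one is its true context — can be illustrated with a minimal example. This is a simplified sketch, not the paper's exact implementation: the function name is hypothetical, the encoder is abstracted away (the candidates are assumed to be already encoded), and the classifier is a plain inner product followed by a softmax.

```python
import numpy as np

def context_classification_loss(query_vec, candidate_vecs, target_idx):
    """Cross-entropy loss for identifying the true context sentence.

    query_vec:      embedding of the input sentence, shape (d,)
    candidate_vecs: embeddings of candidate sentences, shape (k, d);
                    one is the true context, the rest are contrastive
    target_idx:     index of the true context sentence among candidates
    """
    # One score per candidate: inner product with the query embedding.
    scores = candidate_vecs @ query_vec
    # Softmax over candidates (shift by max for numerical stability).
    scores = scores - scores.max()
    probs = np.exp(scores) / np.exp(scores).sum()
    # Negative log-likelihood of the true context sentence.
    return -np.log(probs[target_idx])

# Toy example: the first candidate is most similar to the query,
# so treating it as the true context gives the lowest loss.
query = np.array([1.0, 0.0])
candidates = np.array([[1.0, 0.0],   # true context (aligned with query)
                       [0.0, 1.0],   # contrastive
                       [-1.0, 0.0]]) # contrastive
loss = context_classification_loss(query, candidates, target_idx=0)
```

Because the classifier itself has no parameters, minimizing this loss pushes all learning into the sentence encoder, which is what makes the framework fast to train with different encoder architectures.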

PUBLICATION RECORD

  • Publication year

    2018

  • Venue

    International Conference on Learning Representations

  • Publication date

    2018-02-15

  • Fields of study

    Computer Science


  • Source metadata

    Semantic Scholar


REFERENCES

50 references (full list not included here).

CITED BY

565 citing papers (full list not included here).