Natural Language Multitasking: Analyzing and Improving Syntactic Saliency of Hidden Representations

Gino Brunner, Yuyi Wang, Roger Wattenhofer, Michael Weigelt

Published 2018 in Neural Information Processing Systems

ABSTRACT

We train multi-task autoencoders on linguistic tasks and analyze the learned hidden sentence representations. The representations change significantly when translation and part-of-speech decoders are added. The more decoders a model employs, the better it clusters sentences according to their syntactic similarity, as the representation space becomes less entangled. We explore the structure of the representation space by interpolating between sentences, which yields interesting pseudo-English sentences, many of which have recognizable syntactic structure. Lastly, we point out a notable property of our models: the difference vector between two sentence representations can be added to the representation of a third sentence with similar features to change it in a meaningful way.
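
To make the abstract's mechanisms concrete, the sketch below shows, under stated assumptions, what a shared encoder with one decoder head per task might look like, together with the two latent-space operations (interpolation between sentence vectors, and adding a difference vector to a third sentence). Everything here is illustrative: the class MultiTaskAutoencoder, the helpers interpolate and apply_difference, the GRU layers, and all sizes are assumptions, not the authors' implementation.

```python
# Minimal sketch only: layer sizes, module names, and the GRU choice are
# assumptions; the paper's exact seq2seq architecture is not reproduced here.
import torch
import torch.nn as nn


class MultiTaskAutoencoder(nn.Module):
    """Shared sentence encoder feeding one decoder head per task."""

    def __init__(self, vocab_size=10000, pos_tags=50, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        # Adding decoders (translation, POS tagging) alongside plain
        # reconstruction is what reshapes the shared representation space.
        self.decoders = nn.ModuleDict({
            "reconstruct": nn.GRU(hidden, hidden, batch_first=True),
            "translate": nn.GRU(hidden, hidden, batch_first=True),
            "pos": nn.GRU(hidden, hidden, batch_first=True),
        })
        self.heads = nn.ModuleDict({
            "reconstruct": nn.Linear(hidden, vocab_size),
            "translate": nn.Linear(hidden, vocab_size),
            "pos": nn.Linear(hidden, pos_tags),
        })

    def encode(self, tokens: torch.Tensor) -> torch.Tensor:
        """Map a batch of token ids (batch, seq) to sentence vectors."""
        _, h = self.encoder(self.embed(tokens))
        return h[-1]  # (batch, hidden)

    def decode(self, task: str, z: torch.Tensor, steps: int) -> torch.Tensor:
        """Unroll one task's decoder from a sentence vector z of shape (batch, hidden)."""
        inp = z.unsqueeze(1).expand(-1, steps, -1).contiguous()  # feed z at every step
        out, _ = self.decoders[task](inp, z.unsqueeze(0).contiguous())
        return self.heads[task](out)  # (batch, steps, output_size)


def interpolate(z_a, z_b, steps=5):
    """Points on the line between two sentence representations."""
    return [torch.lerp(z_a, z_b, float(t)) for t in torch.linspace(0, 1, steps)]


def apply_difference(z_a, z_b, z_c):
    """Add the difference vector (z_b - z_a) to a third sentence's vector."""
    return z_c + (z_b - z_a)


model = MultiTaskAutoencoder()
tokens = torch.randint(0, 10000, (3, 12))         # three toy "sentences"
z_a, z_b, z_c = model.encode(tokens)              # one (hidden,) vector each
for z in interpolate(z_a, z_b):
    logits = model.decode("reconstruct", z.unsqueeze(0), steps=12)
z_shifted = apply_difference(z_a, z_b, z_c)       # meaningful edit of z_c
```

In practice each interpolated vector would be decoded back to tokens (e.g. a greedy argmax over the logits) to obtain the pseudo-English sentences the abstract mentions; the same decoding step applied to z_shifted would surface the difference-vector effect.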

PUBLICATION RECORD

  • Publication year: 2018
  • Venue: Neural Information Processing Systems
  • Publication date: 2018-01-18
  • Fields of study: Mathematics, Linguistics, Computer Science
  • Source metadata: Semantic Scholar
