Neural Text Generation with Unlikelihood Training

Sean Welleck, Ilia Kulikov, Stephen Roller, Emily Dinan, Kyunghyun Cho, Jason Weston

Published 2019 in International Conference on Learning Representations

ABSTRACT

Neural text generation is a key tool in natural language applications, but it is well known that there are major problems at its core. In particular, standard likelihood training and decoding lead to dull and repetitive outputs. While some post-hoc fixes have been proposed, in particular top-$k$ and nucleus sampling, they do not address the fact that the token-level probabilities predicted by the model are poor. In this paper we show that the likelihood objective itself is at fault, resulting in a model that assigns too much probability to sequences containing repeats and frequent words, unlike those from the human training distribution. We propose a new objective, unlikelihood training, which forces unlikely generations to be assigned lower probability by the model. We show that both token- and sequence-level unlikelihood training give less repetitive, less dull text while maintaining perplexity, yielding superior generations under standard greedy or beam search. According to human evaluations, our approach with standard beam search also outperforms the currently popular decoding methods of nucleus sampling and beam blocking, thus providing a strong alternative to existing techniques.
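
To make the proposed objective concrete: the paper's token-level loss augments the usual negative log-likelihood with a penalty $-\alpha \sum_{c \in C^t} \log(1 - p_\theta(c \mid x_{<t}))$ over a set of negative candidates $C^t$, which in the token-level formulation are the previous context tokens. Below is a minimal PyTorch sketch of that combined loss; the function name, masking details, and hyperparameter defaults are illustrative assumptions, not the authors' released implementation.

    import torch
    import torch.nn.functional as F

    def token_unlikelihood_loss(logits, targets, alpha=1.0, pad_id=0):
        # logits: (batch, seq_len, vocab); targets: (batch, seq_len) gold tokens.
        logprobs = F.log_softmax(logits, dim=-1)
        # Likelihood term: standard negative log-likelihood of the gold token.
        nll = -logprobs.gather(2, targets.unsqueeze(-1)).squeeze(-1)  # (B, T)

        # Negative candidates C^t: tokens seen earlier in the gold sequence,
        # excluding the current gold token and padding (details assumed here,
        # following the paper's token-level formulation).
        B, T, V = logprobs.shape
        cand = torch.zeros(B, T, V, dtype=torch.bool, device=logits.device)
        for t in range(1, T):
            cand[:, t].scatter_(1, targets[:, :t], True)
        cand.scatter_(2, targets.unsqueeze(-1), False)
        cand[:, :, pad_id] = False

        # Unlikelihood term: -log(1 - p(c | x_<t)) summed over candidates.
        probs = logprobs.exp().clamp(max=1.0 - 1e-6)
        ul = (-torch.log1p(-probs)) * cand
        return (nll + alpha * ul.sum(dim=-1)).mean()

The sequence-level variant applies the same penalty form, but to tokens in repeating n-grams of the model's own decoded continuations rather than to gold-context tokens.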

PUBLICATION RECORD

  • Publication year: 2019
  • Venue: International Conference on Learning Representations
  • Publication date: 2019-08-12
  • Fields of study: Mathematics, Computer Science

  • Source metadata: Semantic Scholar

CONCEPTS

  • beam blocking
    decoding method, baseline

    A decoding-time heuristic, typically implemented as n-gram blocking, that forbids repeated n-grams during beam search; human evaluations find it outperformed by unlikelihood training with standard beam search (see the sketch after this list).

  • beam search
    decoding method

    A standard deterministic decoding method; the paper reports that unlikelihood training improves generations under both greedy search and beam search.

  • likelihood training
    method, baseline

    The standard maximum-likelihood objective, which the paper argues assigns too much probability to sequences containing repeats and frequent tokens.

    Aliases: standard likelihood training

  • neural text generation
    task

    The task of producing open-ended text that this paper aims to make less dull and repetitive.

    Aliases: text generation

  • nucleus sampling
    method, decoding strategy

    A sampling-based decoding strategy, used here as a baseline for open-ended neural text generation, that samples from the smallest set of tokens whose cumulative probability exceeds a threshold p (see the sketch after this list).

    Aliases: top-p sampling, dynamic nucleus sampling

  • perplexity
    metric

    A language-modeling metric, the exponentiated average negative log-likelihood, that measures how well a model predicts the next token in a text sequence (see the formula after this list).

  • unlikelihood training
    method

    A training objective that lowers the probability the model assigns to undesirable generations such as repetitions (a token-level sketch appears after the abstract above).

    Aliases: unlikelihood objective
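
The beam blocking entry above is commonly realized as n-gram blocking: during beam search, any hypothesis whose newest token completes an n-gram already present in that hypothesis is pruned. A minimal illustrative check follows; the function name and default n are assumptions of this sketch.

    def completes_repeated_ngram(tokens, n=3):
        # True if the final n-gram of `tokens` already occurs earlier in it.
        # A beam-blocking decoder would prune candidate hypotheses for which
        # this returns True.
        if len(tokens) < n:
            return False
        last = tuple(tokens[-n:])
        earlier = {tuple(tokens[i:i + n]) for i in range(len(tokens) - n)}
        return last in earlier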
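
The nucleus sampling entry can be made concrete with a short sketch: sort tokens by probability, keep the smallest prefix whose cumulative mass reaches $p$, renormalize, and sample. The function name and default $p$ are illustrative, not from the paper.

    import torch

    def nucleus_sample(logits, p=0.9):
        # logits: (vocab,) unnormalized next-token scores.
        probs = torch.softmax(logits, dim=-1)
        sorted_probs, sorted_idx = torch.sort(probs, descending=True)
        cumulative = torch.cumsum(sorted_probs, dim=-1)
        # Drop tokens whose higher-probability predecessors already cover
        # mass >= p; the top token is always kept.
        drop = (cumulative - sorted_probs) >= p
        kept = sorted_probs.masked_fill(drop, 0.0)
        kept = kept / kept.sum()  # renormalize over the nucleus
        choice = torch.multinomial(kept, num_samples=1)
        return sorted_idx[choice].item()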
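
For the perplexity entry, the standard definition over a held-out sequence of $N$ tokens is $\mathrm{PPL} = \exp\big(-\tfrac{1}{N}\sum_{t=1}^{N} \log p_\theta(x_t \mid x_{<t})\big)$, so lower perplexity means the model assigns higher average probability to the observed next tokens.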

REFERENCES

33 references

CITED BY

665 citing papers