f-GAN: Training Generative Neural Samplers using Variational Divergence Minimization

Sebastian Nowozin, Botond Cseke, Ryota Tomioka

Published 2016 in Neural Information Processing Systems

ABSTRACT

Generative neural samplers are probabilistic models that implement sampling using feedforward neural networks: they take a random input vector and produce a sample from a probability distribution defined by the network weights. These models are expressive and allow efficient computation of samples and derivatives, but cannot be used for computing likelihoods or for marginalization. The generative-adversarial training method allows such models to be trained through the use of an auxiliary discriminative neural network. We show that the generative-adversarial approach is a special case of an existing, more general variational divergence estimation approach, and that any f-divergence can be used for training generative neural samplers. We discuss the benefits of various choices of divergence function for training complexity and the quality of the obtained generative models.
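
The key object behind the abstract's claim is the variational lower bound on an f-divergence, D_f(P || Q) >= sup_T { E_{x~P}[T(x)] - E_{x~Q}[f*(T(x))] }, where f* is the Fenchel conjugate of f; the original GAN objective is recovered for one particular choice of f. As a rough illustration only, the following minimal PyTorch sketch trains a generative neural sampler against this bound for the KL divergence (f(u) = u log u, so f*(t) = exp(t - 1)); the toy data, network sizes, and hyperparameters are assumptions made for the example, not the paper's setup.

    # Minimal f-GAN sketch (illustrative assumptions, not the authors' code).
    # Variational bound:  D_f(P||Q) >= E_P[T(x)] - E_Q[f*(T(x))].
    # Here f is the KL divergence, so f*(t) = exp(t - 1) and T needs no
    # output activation (its range may be all of R).
    import torch
    import torch.nn as nn

    latent_dim, data_dim = 16, 2

    # Generative neural sampler: z ~ N(0, I)  ->  sample x = G(z).
    G = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                      nn.Linear(64, data_dim))
    # Variational function T_omega (the "discriminator" in GAN terminology).
    T = nn.Sequential(nn.Linear(data_dim, 64), nn.ReLU(),
                      nn.Linear(64, 1))

    opt_G = torch.optim.Adam(G.parameters(), lr=1e-4)
    opt_T = torch.optim.Adam(T.parameters(), lr=1e-4)

    def f_star(t):
        # Fenchel conjugate of f(u) = u*log(u); can be large early in training.
        return torch.exp(t - 1.0)

    for step in range(1000):
        x_real = torch.randn(128, data_dim) + 3.0  # stand-in for real data P
        x_fake = G(torch.randn(128, latent_dim))   # model samples, Q

        # Ascend the bound in T (tightens the divergence estimate) ...
        bound = T(x_real).mean() - f_star(T(x_fake.detach())).mean()
        opt_T.zero_grad(); (-bound).backward(); opt_T.step()

        # ... and descend it in G: minimizing -E_Q[f*(T)] lowers the bound
        # (the paper also discusses maximizing E_Q[T] instead).
        g_loss = -f_star(T(G(torch.randn(128, latent_dim)))).mean()
        opt_G.zero_grad(); g_loss.backward(); opt_G.step()

Different divergences only change the pair (output activation for T, conjugate f*), which is what makes the framework a strict generalization of GAN training.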

PUBLICATION RECORD

  • Publication year

    2016

  • Venue

    Neural Information Processing Systems

  • Publication date

    2016-06-02

  • Fields of study

    Mathematics, Computer Science

  • Source metadata

    Semantic Scholar
