Generative Adversarial Text to Image Synthesis

Scott E. Reed, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, Honglak Lee

Published in 2016 at the International Conference on Machine Learning (ICML)

ABSTRACT

Automatic synthesis of realistic images from text would be interesting and useful, but current AI systems are still far from this goal. However, in recent years generic and powerful recurrent neural network architectures have been developed to learn discriminative text feature representations. Meanwhile, deep convolutional generative adversarial networks (GANs) have begun to generate highly compelling images of specific categories, such as faces, album covers, and room interiors. In this work, we develop a novel deep architecture and GAN formulation to effectively bridge these advances in text and image modeling, translating visual concepts from characters to pixels. We demonstrate the capability of our model to generate plausible images of birds and flowers from detailed text descriptions.
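The core idea the abstract describes, conditioning both the generator and the discriminator on a learned text embedding, can be illustrated with a deliberately tiny sketch. The sketch below is not the paper's architecture (which uses deep convolutional networks); it uses toy affine maps, hypothetical dimensions (`NOISE_DIM`, `TEXT_DIM`, `IMG_DIM`), and a random vector standing in for the RNN-derived text encoding, purely to show the conditioning pattern: the text embedding is concatenated with the noise input on the generator side and with the image on the discriminator side.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for illustration only (not the paper's settings).
NOISE_DIM, TEXT_DIM, IMG_DIM = 8, 16, 32

# Toy affine "generator": maps [noise ; text embedding] -> image vector.
W_g = rng.standard_normal((NOISE_DIM + TEXT_DIM, IMG_DIM)) * 0.1

def generate(noise, text_emb):
    """Concatenate noise with the text embedding, then project to image space."""
    return np.tanh(np.concatenate([noise, text_emb]) @ W_g)

# Toy "discriminator": scores an (image, text) pair as real/fake in (0, 1),
# so it can penalize images that do not match their description.
W_d = rng.standard_normal((IMG_DIM + TEXT_DIM, 1)) * 0.1

def discriminate(image, text_emb):
    logit = np.concatenate([image, text_emb]) @ W_d
    return 1.0 / (1.0 + np.exp(-logit))  # sigmoid

# Stand-in for a text encoding of e.g. "a small bird with a red head".
text_emb = rng.standard_normal(TEXT_DIM)
noise = rng.standard_normal(NOISE_DIM)
fake_image = generate(noise, text_emb)
score = discriminate(fake_image, text_emb)
```

Because the discriminator sees the text alongside the image, its gradient pushes the generator toward images that are both realistic and consistent with the description, which is the bridge between text and image modeling the abstract refers to.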

PUBLICATION RECORD

  • Publication year

    2016

  • Venue

    International Conference on Machine Learning

  • Publication date

    2016-05-17

  • Fields of study

    Computer Science


  • Source metadata

    Semantic Scholar


REFERENCES

41 references

CITED BY

3,363 citing papers