VQGAN-CLIP: Open Domain Image Generation and Editing with Natural Language Guidance

Katherine Crowson, Stella Biderman, Daniel Kornis, Dashiell Stander, Eric Hallahan, Louis Castricato, Edward Raff

Published 2022 in European Conference on Computer Vision

ABSTRACT

Generating and editing images from open domain text prompts is a challenging task that heretofore has required expensive and specially trained models. We demonstrate a novel methodology for both tasks which is capable of producing images of high visual quality from text prompts of significant semantic complexity without any training, by using a multimodal encoder to guide image generation. We show on a variety of tasks that using CLIP [37] to guide VQGAN [11] produces higher visual quality outputs than prior, less flexible approaches like DALL-E [38], GLIDE [33] and Open-Edit [24], despite not being trained for the tasks presented. Our code is available in a public repository.
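The guidance procedure the abstract describes can be illustrated with a toy sketch: a latent vector is decoded into an "image", embedded, and iteratively optimized so that its embedding agrees with a text embedding. Everything below is a hypothetical stand-in (random linear maps, finite-difference gradients, made-up dimensions); the actual method uses a pretrained VQGAN decoder, CLIP's image and text encoders, and autograd-based backpropagation.

```python
import numpy as np

# Toy stand-in for the VQGAN-CLIP optimization loop. All shapes and
# operators here are hypothetical; real VQGAN/CLIP models are large
# pretrained networks, not random linear maps.
rng = np.random.default_rng(0)

LATENT, PIXELS, EMBED = 8, 16, 4

decoder = rng.normal(size=(PIXELS, LATENT))  # stands in for the VQGAN decoder
img_enc = rng.normal(size=(EMBED, PIXELS))   # stands in for CLIP's image encoder
text_emb = rng.normal(size=EMBED)            # stands in for CLIP's text embedding

def embed_image(z):
    """Decode latent z to an 'image', then embed it (both toy linear maps)."""
    return img_enc @ (decoder @ z)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def loss(z):
    # The method maximizes text-image agreement, i.e. minimizes negative cosine.
    return -cosine(embed_image(z), text_emb)

z = rng.normal(size=LATENT)
before = cosine(embed_image(z), text_emb)

# Gradient descent on the loss via central finite differences (real code
# would backpropagate through the decoder and encoder instead).
eps, lr = 1e-5, 0.5
for _ in range(200):
    grad = np.zeros(LATENT)
    for i in range(LATENT):
        dz = np.zeros(LATENT)
        dz[i] = eps
        grad[i] = (loss(z + dz) - loss(z - dz)) / (2 * eps)
    z -= lr * grad

after = cosine(embed_image(z), text_emb)
print(before, after)  # the text-image similarity should increase
```

The key point the sketch shows is that no task-specific training happens: only the latent code is updated, with the frozen encoder acting purely as a differentiable critic.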
