Multimodal Few-Shot Learning with Frozen Language Models

M. Tsimpoukelli, Jacob Menick, Serkan Cabi, S. Eslami, O. Vinyals, Felix Hill

Published 2021 in Neural Information Processing Systems

ABSTRACT

When trained at sufficient scale, auto-regressive language models exhibit the notable ability to learn a new language task after being prompted with just a few examples. Here, we present a simple, yet effective, approach for transferring this few-shot learning ability to a multimodal setting (vision and language). Using aligned image and caption data, we train a vision encoder to represent each image as a sequence of continuous embeddings, such that a pre-trained, frozen language model prompted with this prefix generates the appropriate caption. The resulting system is a multimodal few-shot learner, with the surprising ability to learn a variety of new tasks when conditioned on examples, represented as a sequence of multiple interleaved image and text embeddings. We demonstrate that it can rapidly learn words for new objects and novel visual categories, do visual question-answering with only a handful of examples, and make use of outside knowledge, by measuring a single model on a variety of established and new benchmarks.
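The abstract's core mechanism can be illustrated with a toy sketch: a trainable vision encoder maps an image into a short sequence of continuous embeddings (a visual prefix), which is concatenated with text token embeddings and fed to the frozen language model. All names, shapes, and the linear encoder below are illustrative assumptions, not the paper's actual architecture or code.

```python
import numpy as np

D = 8          # embedding dimension of the (frozen) language model (assumed)
N_PREFIX = 2   # number of visual prefix embeddings per image (assumed)

rng = np.random.default_rng(0)

# Frozen language-model piece: the token embedding table is never updated.
VOCAB = 16
token_embeddings = rng.normal(size=(VOCAB, D))  # frozen

# Trainable vision encoder: for this sketch, a single linear map from
# flattened image features to N_PREFIX embeddings of size D.
IMG_FEATURES = 12
W_vision = rng.normal(size=(IMG_FEATURES, N_PREFIX * D)) * 0.1  # trainable

def encode_image(image_features: np.ndarray) -> np.ndarray:
    """Map image features to an (N_PREFIX, D) visual prefix."""
    return (image_features @ W_vision).reshape(N_PREFIX, D)

def build_prompt(image_features: np.ndarray, caption_ids: list) -> np.ndarray:
    """Concatenate the visual prefix with the caption's token embeddings,
    producing the continuous sequence the frozen LM conditions on."""
    prefix = encode_image(image_features)
    text = token_embeddings[caption_ids]
    return np.concatenate([prefix, text], axis=0)

image = rng.normal(size=IMG_FEATURES)
seq = build_prompt(image, caption_ids=[3, 7, 1])
print(seq.shape)  # → (5, 8): 2 visual prefix embeddings + 3 token embeddings
```

During training, only `W_vision` would receive gradients (through the frozen LM); at test time, several such image-and-text segments can be concatenated to form a multimodal few-shot prompt.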

PUBLICATION RECORD

  • Publication year

    2021

  • Venue

    Neural Information Processing Systems

  • Publication date

    2021-06-25

  • Fields of study

    Computer Science


  • Source metadata

    Semantic Scholar



REFERENCES

43 references

CITED BY

931 citing papers