Image search using multilingual texts: a cross-modal learning approach between image and text

Maxime Portaz, Hicham Randrianarivo, A. Nivaggioli, Estelle Maudet, Christophe Servan, Sylvain Peyronnet (Qwant Research)

Published 2019 on arXiv.org

ABSTRACT

Multilingual (or cross-lingual) embeddings represent several languages in a single vector space. A common embedding space enables a shared semantics between words from different languages. In this paper, we propose to embed images and texts into a single distributional vector space, enabling image search with text queries that express information needs related to the (visual) content of images, as well as search by image similarity. Our framework forces the representation of an image to be similar to the representation of the text that describes it. Moreover, by using multilingual embeddings we ensure that words from two different languages have close descriptors and are thus attached to similar images. We provide experimental evidence of the efficiency of our approach on two datasets: Common Objects in COntext (COCO) [19] and Multi30K [7].
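The paper's own implementation is not reproduced here. As a minimal sketch of the kind of joint image-text alignment the abstract describes, the snippet below assumes a PyTorch setup; all layer sizes, variable names, and the hardest-negative triplet ranking loss are illustrative choices, not details taken from the paper. It projects precomputed image features and multilingual sentence embeddings into one shared space and pulls matching (image, caption) pairs together while pushing mismatched pairs apart.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointEmbedding(nn.Module):
    """Sketch: project image features and (multilingual) sentence embeddings
    into a shared space and align them with a triplet ranking loss.
    Dimensions and layer choices are illustrative, not the paper's."""

    def __init__(self, img_dim=2048, txt_dim=300, joint_dim=512, margin=0.2):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, joint_dim)   # image branch
        self.txt_proj = nn.Linear(txt_dim, joint_dim)   # text branch
        self.margin = margin

    def embed(self, img_feats, txt_feats):
        # L2-normalise so cosine similarity reduces to a dot product
        img = F.normalize(self.img_proj(img_feats), dim=-1)
        txt = F.normalize(self.txt_proj(txt_feats), dim=-1)
        return img, txt

    def triplet_loss(self, img_feats, txt_feats):
        # Batch of matching (image, caption) pairs; the other captions/images
        # in the same batch act as negatives (hardest negative per anchor).
        img, txt = self.embed(img_feats, txt_feats)
        scores = img @ txt.t()                # pairwise cosine similarities
        pos = scores.diag().unsqueeze(1)      # matching pairs lie on the diagonal
        cost_txt = (self.margin + scores - pos).clamp(min=0)      # caption retrieval
        cost_img = (self.margin + scores - pos.t()).clamp(min=0)  # image retrieval
        mask = torch.eye(scores.size(0), dtype=torch.bool, device=scores.device)
        cost_txt = cost_txt.masked_fill(mask, 0)
        cost_img = cost_img.masked_fill(mask, 0)
        return cost_txt.max(dim=1)[0].mean() + cost_img.max(dim=0)[0].mean()

# Illustrative usage with random features standing in for CNN image features
# and multilingual sentence embeddings.
model = JointEmbedding()
img_batch = torch.randn(32, 2048)
txt_batch = torch.randn(32, 300)
loss = model.triplet_loss(img_batch, txt_batch)
loss.backward()
```

Because the text branch consumes multilingual sentence embeddings, captions in different languages describing the same image land near each other in the joint space, so a query in either language retrieves similar images.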

PUBLICATION RECORD

  • Publication year

    2019

  • Venue

    arXiv.org

  • Publication date

    2019-03-21

  • Fields of study

    Linguistics, Computer Science

  • Source metadata

    Semantic Scholar
