Grounding Action Descriptions in Videos

Michaela Regneri, Marcus Rohrbach, Dominikus Wetzel, Stefan Thater, B. Schiele, Manfred Pinkal

Published 2013 in Transactions of the Association for Computational Linguistics

ABSTRACT

Recent work has shown that the integration of visual information into text-based models can substantially improve model predictions, but so far only visual information extracted from static images has been used. In this paper, we consider the problem of grounding sentences describing actions in visual information extracted from videos. We present a general-purpose corpus that aligns high-quality videos with multiple natural language descriptions of the actions portrayed in the videos, together with an annotation of how similar the action descriptions are to each other. Experimental results demonstrate that a text-based model of similarity between actions improves substantially when combined with visual information from videos depicting the described actions.

PUBLICATION RECORD

  • Publication year

    2013

  • Venue

    Transactions of the Association for Computational Linguistics

  • Publication date

    2013-03-31

  • Fields of study

    Computer Science


  • Source metadata

    Semantic Scholar



REFERENCES

30 references

CITED BY

535 citing papers