Understanding videos, constructing plots: Learning a visually grounded storyline model from annotated videos

A. Gupta, Praveen Srinivasan, Jianbo Shi, L. Davis

Published in the 2009 IEEE Conference on Computer Vision and Pattern Recognition

ABSTRACT

Analyzing videos of human activities involves not only recognizing actions (typically based on their appearances), but also determining the story/plot of the video. The storyline of a video describes causal relationships between actions. Beyond recognition of individual actions, discovering causal relationships helps to better understand the semantic meaning of the activities. We present an approach to learn a visually grounded storyline model of videos directly from weakly labeled data. The storyline model is represented as an AND-OR graph, a structure that can compactly encode storyline variation across videos. The edges in the AND-OR graph correspond to causal relationships which are represented in terms of spatio-temporal constraints. We formulate an Integer Programming framework for action recognition and storyline extraction using the storyline model and visual groundings learned from training data.
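The AND-OR graph described above can be sketched as a small data structure: AND nodes require all of their children to occur, while OR nodes encode storyline variation by selecting exactly one child. The sketch below is a minimal illustration under assumed names (`Node`, `expand`, and the toy baseball storyline are hypothetical, not the paper's implementation), and it omits the spatio-temporal constraints and the Integer Programming inference:

```python
# Minimal sketch of an AND-OR storyline graph. Names and the toy
# example are illustrative assumptions, not the paper's code.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    kind: str                      # "AND": all children occur; "OR": exactly one child
    children: list = field(default_factory=list)

def expand(node, choices):
    """Return the sequence of primitive actions implied by one storyline
    instantiation, given OR decisions (OR-node name -> chosen child)."""
    if not node.children:
        return [node.name]
    picked = node.children if node.kind == "AND" else [choices[node.name]]
    actions = []
    for child in picked:
        actions.extend(expand(child, choices))
    return actions

# Toy storyline: a pitch occurs, then the batter either hits or misses.
hit = Node("hit", "AND")
miss = Node("miss", "AND")
outcome = Node("outcome", "OR", [hit, miss])
pitch = Node("pitch", "AND")
game = Node("game", "AND", [pitch, outcome])

print(expand(game, {"outcome": hit}))   # one instantiation: ['pitch', 'hit']
```

Each assignment of OR choices yields one concrete storyline, which is how a single graph compactly encodes variation across videos; the paper's inference additionally scores instantiations against visual detections and causal spatio-temporal constraints.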

PUBLICATION RECORD

  • Publication year

    2009

  • Venue

    2009 IEEE Conference on Computer Vision and Pattern Recognition

  • Publication date

    2009-06-20

  • Fields of study

    Computer Science


  • Source metadata

    Semantic Scholar



REFERENCES

26 references

CITED BY

299 citing papers