DAiSEE: Towards User Engagement Recognition in the Wild

Abhay Gupta, Arjun D'Cunha, Kamal N. Awasthi, V. Balasubramanian

Published 2016 in arXiv: Computer Vision and Pattern Recognition

ABSTRACT

We introduce DAiSEE, the first multi-label video classification dataset, comprising 9068 video snippets captured from 112 users, for recognizing the user affective states of boredom, confusion, engagement, and frustration in the wild. The dataset has four levels of labels for each affective state, namely very low, low, high, and very high, which are crowd-annotated and correlated with a gold-standard annotation created by a team of expert psychologists. We have also established benchmark results on this dataset using state-of-the-art video classification methods. We believe that DAiSEE will provide the research community with challenges in feature extraction, context-based inference, and the development of suitable machine learning methods for related tasks, thus providing a springboard for further research. The dataset is available for download at this https URL.
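For illustration, the annotation scheme described above, four affective states each rated on a four-level ordinal scale, can be sketched as a simple data structure. This is a minimal sketch; the class and field names are hypothetical and do not reflect the dataset's official distribution format:

```python
from dataclasses import dataclass

# The four affective states annotated per snippet in DAiSEE,
# each on a four-level ordinal scale (very low .. very high).
STATES = ("boredom", "confusion", "engagement", "frustration")
LEVELS = ("very low", "low", "high", "very high")

@dataclass
class SnippetLabel:
    """Hypothetical container for one video snippet's multi-label annotation.

    Each field is an index into LEVELS (0 = very low .. 3 = very high).
    """
    boredom: int
    confusion: int
    engagement: int
    frustration: int

    def as_text(self) -> dict:
        """Map each state's ordinal index to its textual level."""
        return {s: LEVELS[getattr(self, s)] for s in STATES}

# Example: a highly engaged, slightly confused user.
label = SnippetLabel(boredom=0, confusion=1, engagement=3, frustration=0)
print(label.as_text())
```

Because every snippet carries a level for all four states simultaneously, the task is multi-label rather than single-class video classification.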

PUBLICATION RECORD

  • Publication year

    2016

  • Venue

    arXiv: Computer Vision and Pattern Recognition

  • Publication date

    2016-09-07

  • Fields of study

    Computer Science, Psychology


  • Source metadata

    Semantic Scholar
