Deep Construction of an Affective Latent Space via Multimodal Enactment
Giuseppe Boccignone, Donatello Conte, Vittorio Cuculo, Alessandro D'Amelio, G. Grossi, Raffaella Lanzarotti
Published 2018 in IEEE Transactions on Cognitive and Developmental Systems

ABSTRACT
We draw on a simulationist approach to the analysis of facially displayed emotions, e.g., in the course of a face-to-face interaction between an expresser and an observer. At the heart of such a perspective lies the enactment of the perceived emotion in the observer. We propose a novel probabilistic framework based on a deep latent representation of a continuous affect space, which can be exploited for both the estimation and the enactment of affective states in a multimodal space (visible facial expressions and physiological signals). The rationale behind the approach lies in the large body of evidence from affective neuroscience showing that, when we observe emotional facial expressions, we react with congruent facial mimicry. Further, in more complex situations, affect understanding is likely to rely on a comprehensive representation grounding the reconstruction of the bodily state associated with the displayed emotion. We show that our approach addresses such problems in a unified and principled way, avoiding ad hoc heuristics while minimizing learning effort.
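The abstract above describes a shared latent affect space from which both modalities (facial expressions and physiological signals) can be generated, and into which observations can be mapped back. The following toy sketch illustrates that two-way structure only; it is NOT the paper's model. All names (`enact`, `estimate`, `W_face`, `W_physio`) are hypothetical, and linear maps stand in for the deep networks purely to keep the example minimal and self-contained.

```python
import numpy as np

# Hypothetical linear-Gaussian analogue of a shared latent affect space:
# a low-dimensional latent z generates two "modalities" via linear maps.
# (The paper uses a deep latent representation; linear maps are a stand-in.)
rng = np.random.default_rng(0)

latent_dim = 2
face_dim, physio_dim = 5, 3

W_face = rng.standard_normal((face_dim, latent_dim))
W_physio = rng.standard_normal((physio_dim, latent_dim))

def enact(z):
    """Decode a latent affect state into both modalities."""
    return W_face @ z, W_physio @ z

def estimate(x_face):
    """Infer the latent state from the facial modality alone
    (least-squares here, in place of a learned encoder)."""
    z_hat, *_ = np.linalg.lstsq(W_face, x_face, rcond=None)
    return z_hat

# Round trip: observe a facial expression, recover the latent affect
# state, then enact the congruent physiological response from it.
z_true = rng.standard_normal(latent_dim)
x_face, x_physio = enact(z_true)
z_hat = estimate(x_face)
x_physio_hat = W_physio @ z_hat
```

In this simplified setting the latent state is recovered exactly, so the "enacted" physiological signal matches the one generated from the true state; in the probabilistic framework the abstract sketches, estimation and enactment would instead be inference and generation in a learned multimodal latent space.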
PUBLICATION RECORD
- Publication date: 2018-12-01
- Fields of study: Computer Science
- Source metadata: Semantic Scholar
REFERENCES: 88 references.
CITED BY: 19 citing papers.