Recognition of emotional states using voice, face image and thermal image of face
T. Kitazoe, Sung-Ill Kim, Y. Yoshitomi, Tatsuhiko Ikeda
Published 2000 in Interspeech
ABSTRACT
A new integration method is presented for recognizing emotional expressions, using both voice and facial expressions. For voice, we use prosodic parameters such as pitch, energy, and their derivatives, which are trained with Hidden Markov Models (HMMs) for recognition. For facial expressions, we use feature parameters from thermal images in addition to visible images, which are trained with neural networks (NNs) for recognition. The thermal images are captured by infrared radiation, which is not influenced by lighting conditions. The total recognition rate is higher than the rate obtained from each modality in isolation. The results are compared with recognition by a human questionnaire.
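The abstract describes combining two independent recognizers, a voice-based HMM and a face-based NN. The paper does not specify the fusion rule, so the sketch below assumes a simple weighted-sum late fusion over per-emotion scores; the function name, emotion labels, weights, and scores are all hypothetical illustrations, not values from the paper.

```python
# Hypothetical late-fusion sketch: combine per-emotion scores from a
# voice recognizer (HMM-based in the paper) and a face recognizer
# (NN-based in the paper). The weighted-sum rule here is an assumption.

def fuse_scores(voice_scores, face_scores, w_voice=0.5, w_face=0.5):
    """Combine per-emotion scores from both modalities by a weighted sum.

    voice_scores / face_scores: dicts mapping emotion label -> score in [0, 1].
    Returns the emotion label with the highest combined score.
    """
    emotions = voice_scores.keys() & face_scores.keys()
    combined = {
        e: w_voice * voice_scores[e] + w_face * face_scores[e]
        for e in emotions
    }
    return max(combined, key=combined.get)

# Illustrative scores (fabricated for the example, not from the paper).
# The modalities disagree on the top label; fusion resolves the conflict.
voice = {"happy": 0.6, "angry": 0.3, "sad": 0.1}
face = {"happy": 0.4, "angry": 0.5, "sad": 0.1}
print(fuse_scores(voice, face))  # -> happy (0.5 vs 0.4 vs 0.1 combined)
```

This kind of decision-level fusion lets each modality be trained separately (HMMs on prosody, NNs on image features), which matches the abstract's observation that the integrated rate exceeds either isolated recognizer.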
PUBLICATION RECORD
- Publication date: 2000-10-16
- Fields of study: Computer Science, Engineering
- Source metadata: Semantic Scholar