Learning to Anonymize Faces for Privacy Preserving Action Detection
Zhongzheng Ren, Yong Jae Lee, M. Ryoo
Published 2018 in European Conference on Computer Vision
ABSTRACT
There is increasing concern that computer vision devices invade their users' privacy by recording unwanted videos. On one hand, we want camera systems and robots to recognize important events and assist in daily life by understanding their videos; on the other hand, we want to ensure that they do not intrude on people's privacy. In this paper, we propose a new principled approach for learning a video face anonymizer. We use an adversarial training setting in which two systems compete: (1) a video anonymizer that modifies the original video to remove privacy-sensitive information (i.e., human faces) while trying to maximize spatial action detection performance, and (2) a discriminator that tries to extract privacy-sensitive information from the anonymized videos. The end result is a video anonymizer that performs a pixel-level modification to anonymize each person's face, with minimal effect on action detection performance. We experimentally confirm the benefit of our approach over conventional hand-crafted video/face anonymization methods, including masking, blurring, and noise addition. See the project page this https URL for a demo video and more results.
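The two-player objective described in the abstract can be sketched as a minimax loss: the anonymizer tries to keep action-detection loss low while driving the face discriminator's identification loss up, and the discriminator independently minimizes that same identification loss. The following is a minimal illustrative sketch under stated assumptions; the function names and the weighting factor `lam` are not from the paper, which trains deep networks rather than combining scalar losses this way.

```python
# Illustrative sketch of the adversarial objective (names and the
# trade-off weight `lam` are assumptions, not the paper's actual code).

def anonymizer_objective(det_loss, face_id_loss, lam=0.5):
    """Anonymizer M: keep action-detection loss low while *maximizing*
    the discriminator's face-identification loss (hence the minus sign)."""
    return det_loss - lam * face_id_loss

def discriminator_objective(face_id_loss):
    """Discriminator D: simply minimize its own face-identification
    loss on the anonymized frames."""
    return face_id_loss

def alternating_step(det_loss, face_id_loss, lam=0.5):
    """One round of the two-player game: each player evaluates its own
    objective; in training, each would then take a gradient step."""
    return (anonymizer_objective(det_loss, face_id_loss, lam),
            discriminator_objective(face_id_loss))
```

In training, the two objectives are optimized in alternation: the discriminator improves at recovering identity, which in turn forces the anonymizer to modify faces more effectively while preserving the cues needed for action detection.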
PUBLICATION RECORD
- Publication year: 2018
- Venue: European Conference on Computer Vision
- Publication date: 2018-03-30
- Fields of study: Computer Science
- Source metadata: Semantic Scholar