Weakly Supervised Faster-RCNN+FPN to classify animals in camera trap images
Pierrick Pochelu, Clara Erard, Philippe Cordier, S. Petiton, B. Conche
Published 2022 in International Conference on Image, Video and Signal Processing

ABSTRACT
Camera traps have revolutionized the study of many animal species that were previously nearly impossible to observe because of their habitat or behavior. These cameras, typically fixed to a tree, capture a short sequence of images when triggered. Deep learning can reduce the resulting annotation workload by automatically classifying images by taxon and discarding empty ones. However, a standard deep neural network classifier fails because animals often occupy only a small portion of the high-definition images. We therefore propose a workflow, Weakly Supervised Faster-RCNN+FPN, suited to this challenge. The model is weakly supervised because it requires only one animal taxon label per image and no manual bounding box annotations. First, it generates weakly supervised bounding box annotations automatically from the motion across multiple frames. It then trains a Faster-RCNN+FPN model on this weak supervision. Experimental results were obtained on two datasets from biodiversity monitoring campaigns in Papua New Guinea and Missouri, and then on an easily reproducible testbed.
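The core idea of the weak supervision step, localizing the animal from motion across the frames of one trigger sequence, can be sketched as follows. This is an illustrative assumption, not the paper's exact algorithm: the function name, the median-frame background estimate, and the fixed difference threshold are all hypothetical choices standing in for whatever motion heuristic the authors actually use.

```python
import numpy as np

def weak_bounding_box(frames, thresh=25):
    """Derive a weak bounding box from motion across a burst of frames.

    Hedged sketch: assumes the static scene can be approximated by the
    per-pixel median over the burst, and that pixels deviating from it
    by more than `thresh` belong to the moving animal.
    frames: list of HxW grayscale uint8 arrays from one trigger sequence.
    Returns (x_min, y_min, x_max, y_max) or None if no motion is found.
    """
    stack = np.stack([f.astype(np.int16) for f in frames])
    background = np.median(stack, axis=0)          # static scene estimate
    motion = (np.abs(stack - background) > thresh).any(axis=0)
    ys, xs = np.nonzero(motion)                    # pixels that moved
    if xs.size == 0:
        return None                                # empty image, no animal
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())
```

The resulting box, paired with the image-level taxon label, would then serve as a (noisy) training target for the Faster-RCNN+FPN detector.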
PUBLICATION RECORD
- Publication date
2022-03-18
- Fields of study
Biology, Computer Science, Environmental Science
- Source metadata
Semantic Scholar