ViS-HuD: Using Visual Saliency to Improve Human Detection with Convolutional Neural Networks
Vandit Gajjar, Yash Khandhediya, Ayesha Gurnani, Viraj Mavani, M. Raval
Published 2018 in 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
ABSTRACT
This paper presents a technique to improve human detection in still images using deep learning. The proposed method, ViS-HuD, first computes a visual saliency map of the input image; the image is then multiplied element-wise by this map, and the product is fed to a Convolutional Neural Network (CNN) that detects humans. The saliency map is generated with ML-Net, and human detection is carried out with DetectNet. ML-Net is pre-trained on SALICON for visual saliency detection, while DetectNet is pre-trained on the ImageNet database for image classification. The CNNs of ViS-HuD were trained on two challenging databases: Penn-Fudan and the TUD-Brussels benchmark. Experimental results demonstrate that the proposed method achieves state-of-the-art performance on the Penn-Fudan dataset with 91.4% human detection accuracy, and an average miss rate of 53% on the TUD-Brussels benchmark.
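The core pre-processing step described in the abstract — modulating the input image by its visual saliency map before detection — can be sketched as a simple element-wise product. This is a minimal illustration, not the authors' implementation: the helper name `saliency_weighted_input` is hypothetical, and in ViS-HuD the saliency map would come from ML-Net rather than being supplied by hand.

```python
import numpy as np

def saliency_weighted_input(image, saliency):
    """Modulate an image by a visual saliency map.

    image:    H x W x C float array, values in [0, 1]
    saliency: H x W float array, values in [0, 1]
              (in ViS-HuD this would be produced by ML-Net)

    Returns the element-wise product, which is what the detection
    CNN receives in place of the raw image.
    """
    if image.shape[:2] != saliency.shape:
        raise ValueError("saliency map must match image spatial size")
    # Broadcast the single-channel saliency map across colour channels.
    return image * saliency[..., np.newaxis]

# Toy example: a 2x2 RGB image and a saliency map that keeps the
# left column and suppresses the right column.
img = np.ones((2, 2, 3))
sal = np.array([[1.0, 0.0],
                [1.0, 0.0]])
out = saliency_weighted_input(img, sal)
```

After this step, salient regions retain their original pixel values while non-salient regions are attenuated toward zero, biasing the downstream detector toward likely human locations.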
PUBLICATION RECORD
- Publication year: 2018
- Venue: 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
- Publication date: 2018-02-21
- Fields of study: Computer Science
- Source metadata: Semantic Scholar
REFERENCES
57 references.
CITED BY
8 citing papers.