Fine Tuning of Deep Contexts Toward Improved Perceptual Quality of In-Paintings
Qinglong Chang, Kwok-Wai Hung, Jianmin Jiang
Published 2021 in IEEE Transactions on Cybernetics

ABSTRACT
In recent years, a number of deep learning approaches have been successfully introduced to tackle the problem of image in-painting and achieve better perceptual effects. However, obvious hole-edge artifacts remain in these deep learning-based approaches, which need to be rectified before they become useful for practical applications. In this article, we propose an iteration-driven in-painting approach that combines a deep context model with the backpropagation mechanism to fine-tune the learning-based in-painting process and, hence, achieves further improvement over the existing state of the art. Our iterative approach fine-tunes the image generated by a pretrained deep context model via backpropagation using a weighted context loss. Extensive experiments on publicly available test sets, including the CelebA, Paris Streets, and PASCAL VOC 2012 datasets, show that our proposed method achieves better visual perceptual quality in terms of hole-edge artifacts compared with state-of-the-art in-painting methods using various context models.
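The iterative fine-tuning described in the abstract can be sketched as latent-code optimization against a weighted context loss. The following is a minimal, hypothetical PyTorch sketch: the tiny `G` network is a stand-in for the paper's pretrained deep context model, and the edge-boosted weighting scheme is an assumption, not the paper's exact formulation.

```python
import torch

# Hypothetical stand-in for a pretrained deep context model (e.g. a GAN
# generator); the actual method assumes a network pretrained for in-painting.
G = torch.nn.Sequential(
    torch.nn.Linear(16, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 8 * 8), torch.nn.Tanh(),
)

torch.manual_seed(0)
target = torch.rand(8, 8)          # ground-truth image (known context)
mask = torch.ones(8, 8)
mask[2:6, 2:6] = 0.0               # 0 inside the hole, 1 on known pixels

# Weighted context loss: known pixels near the hole edge receive larger
# weights, emphasising hole-edge consistency during fine-tuning. Here the
# weight is a simple blur of the hole indicator (an illustrative assumption).
hole = 1.0 - mask
kernel = torch.ones(1, 1, 3, 3) / 9.0
edge = torch.nn.functional.conv2d(hole[None, None], kernel, padding=1)[0, 0]
weights = mask * (1.0 + 4.0 * edge)  # boost known pixels adjacent to the hole

# Iteratively fine-tune the latent code via backpropagation.
z = torch.zeros(16, requires_grad=True)
opt = torch.optim.Adam([z], lr=0.05)
losses = []
for _ in range(200):
    opt.zero_grad()
    out = G(z).view(8, 8)
    loss = (weights * (out - target).abs()).sum() / weights.sum()
    loss.backward()
    opt.step()
    losses.append(loss.item())

# The final in-painting blends the fine-tuned output into the hole region.
result = mask * target + hole * G(z).view(8, 8).detach()
```

The key design point is that the generator's weights stay frozen: only the latent input is updated, so the pretrained context prior fills the hole while the weighted loss pulls the output toward consistency with the known pixels around the hole edge.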
PUBLICATION RECORD
- Publication year: 2021
- Venue: IEEE Transactions on Cybernetics
- Publication date: 2021-09-08
- Fields of study: Medicine, Computer Science
- Source metadata: Semantic Scholar, PubMed
REFERENCES
26 references
CITED BY
1 citing paper