Hole Filling for View Synthesis Using Depth Guided Global Optimization
Published 2018 in IEEE Access

ABSTRACT
View synthesis is an effective way to generate multi-view content from a limited number of views, and can be utilized for 2-D-to-3-D video conversion, multi-view video compression, and virtual reality. Among view synthesis techniques, depth-image-based rendering (DIBR) is an important method for generating virtual views from a video-plus-depth sequence. However, holes may be produced in the DIBR process. Many hole filling methods have been proposed to tackle this issue, but most of them cannot achieve global coherence or produce trustworthy content. In this paper, a hole filling method with depth-guided global optimization is proposed for view synthesis. The global optimization is achieved by iterating between a spatio-temporal approximate nearest neighbor (ANN) search step and a video reconstruction step. Directly applying global optimization can introduce foreground artifacts into the synthesized video. To prevent this, two strategies are developed: depth information guides the spatio-temporal ANN search, and a dedicated initialization step is specified in the global optimization procedure. Experimental results demonstrate that the proposed method outperforms competing methods in terms of visual quality, trustworthy textures, and temporal consistency in the synthesized video.
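The iterate-until-convergence structure described in the abstract (alternating a patch-based ANN search with a reconstruction step, with depth restricting where source patches may come from) can be sketched on a single frame. This is a minimal illustrative toy, not the authors' implementation: it uses a brute-force patch search instead of a real ANN structure, works on one grayscale image rather than a spatio-temporal volume, and treats pixels below the median depth as "background" as a stand-in for the paper's depth guidance. All names are hypothetical.

```python
import numpy as np

def fill_holes(img, mask, depth, radius=1, iters=4):
    """Toy depth-guided hole filling by iterated patch search + reconstruction.

    img    : 2-D grayscale frame
    mask   : bool array, True at disocclusion holes
    depth  : per-pixel depth; smaller values treated as background here
    """
    out = img.astype(float).copy()
    bg = depth <= np.median(depth)            # crude background test (assumption)
    out[mask] = out[~mask & bg].mean()        # initialize holes from background
    H, W = out.shape
    # Candidate source patches: fully known, background-only (depth guidance).
    src = [(y, x) for y in range(radius, H - radius)
                  for x in range(radius, W - radius)
                  if bg[y, x] and not mask[y - radius:y + radius + 1,
                                           x - radius:x + radius + 1].any()]
    holes = [(y, x) for y, x in zip(*np.nonzero(mask))
             if radius <= y < H - radius and radius <= x < W - radius]
    for _ in range(iters):
        # "ANN search" step: brute-force nearest patch for each hole pixel.
        new = {}
        for (y, x) in holes:
            tgt = out[y - radius:y + radius + 1, x - radius:x + radius + 1]
            best = min(src, key=lambda s: np.sum(
                (out[s[0] - radius:s[0] + radius + 1,
                     s[1] - radius:s[1] + radius + 1] - tgt) ** 2))
            new[(y, x)] = out[best]
        # Reconstruction step: write back the matched centers.
        for (y, x), v in new.items():
            out[y, x] = v
    return out
```

A real system would replace the brute-force `min` with a PatchMatch-style ANN search over the spatio-temporal volume, which is what makes the global optimization tractable on video.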
PUBLICATION RECORD
- Publication year: 2018
- Venue: IEEE Access
- Fields of study: Computer Science, Engineering
- Source metadata: Semantic Scholar
REFERENCES
47 references

CITED BY
12 citing papers