Person Detection Based Adaptive Video Synthesis: A Fusion Framework for Ultra HD Panorama and Green Screen Videos
Gang Wu, Jinjing Dai, Q. Lin, Chang Liu, Jiayi Mi
Published 2022 in 2022 International Conference on Virtual Reality, Human-Computer Interaction and Artificial Intelligence (VRHCIAI)
ABSTRACT
The past two years have witnessed the growing prevalence of the metaverse, while the COVID-19 pandemic has accelerated the formation of a non-contact culture. Under these circumstances, virtual reality has once again attracted public attention. Panorama video, one of the most important forms of virtual reality, provides users with a highly immersive experience. This paper proposes a fusion framework for Ultra HD panorama videos and green screen videos. In this framework, a panorama video serves as the virtual background layer, onto which a user-defined real foreground layer, a portrait obtained by green screen matting, is superimposed. During the video fusion process, the portrait size is adaptively determined from parameters provided by a person detection algorithm running on the panorama video. This yields a more natural video synthesis result, which can be presented on a head-mounted display or a flat screen device to give users a nearly indistinguishable visual experience.
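The pipeline the abstract describes (green screen matting, person-detection-driven scaling, layer compositing) can be sketched in minimal form. This is an illustrative sketch, not the paper's implementation: the chroma-key threshold, the nearest-neighbor resize, and all function names are assumptions, and a detected bounding-box height (`bbox_h`) stands in for the output of the paper's person detector.

```python
import numpy as np

def chroma_key_mask(frame, green_thresh=1.3):
    """Foreground mask for an RGB green screen frame (assumed RGB channel order).
    A pixel is foreground when its green channel does not dominate red/blue."""
    r = frame[..., 0].astype(float)
    g = frame[..., 1].astype(float)
    b = frame[..., 2].astype(float)
    return g < green_thresh * np.maximum(r, b) + 1e-6  # True = keep pixel

def scale_to_height(img, target_h):
    """Nearest-neighbor resize to target_h rows, preserving aspect ratio."""
    h, w = img.shape[:2]
    target_w = max(1, round(w * target_h / h))
    rows = np.arange(target_h) * h // target_h
    cols = np.arange(target_w) * w // target_w
    return img[rows][:, cols]

def composite(background, portrait, mask, bbox_h, anchor_xy):
    """Paste the matted portrait onto the panorama background layer,
    scaled so its height matches the detected person height bbox_h.
    Assumes the scaled portrait fits inside the background at anchor_xy."""
    out = background.copy()
    p = scale_to_height(portrait, bbox_h)
    m = scale_to_height(mask[..., None].astype(np.uint8), bbox_h)[..., 0].astype(bool)
    x, y = anchor_xy
    h, w = p.shape[:2]
    region = out[y:y + h, x:x + w]   # view into out; writes propagate
    region[m] = p[m]
    return out
```

In a full system the same steps would run per frame, with `bbox_h` and `anchor_xy` refreshed from the person detector so the composited portrait tracks the scale of people already visible in the panorama.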
PUBLICATION RECORD
- Publication year
2022
- Venue
2022 International Conference on Virtual Reality, Human-Computer Interaction and Artificial Intelligence (VRHCIAI)
- Publication date
2022-10-01
- Fields of study
Medicine, Computer Science, Engineering
- Source metadata
Semantic Scholar