SSL-Net: Point-Cloud Generation Network With Self-Supervised Learning
Ran Sun, Yongbin Gao, Zhijun Fang, Anjie Wang, Cengsi Zhong
Published 2019 in IEEE Access

ABSTRACT
Inferring the three-dimensional structure of objects from monocular images has far-reaching applications in 3D perception. In this paper, we propose a self-supervised network (SSL-Net) to generate 3D point clouds from a single RGB image, unlike existing work that requires multiple views of the same object to recover the full 3D geometry. To provide an extra self-supervisory signal, the generated 3D model is simultaneously rendered into an image and compared with the input image. In addition, a pose estimation network is integrated into the point-cloud generation network to eliminate the pose ambiguity of the input image; the estimated pose is also used to render, from the generated point cloud, a 2D image with the same pose as the input image. Extensive experiments on both real and synthetic datasets show that our method not only qualitatively generates point clouds with more detail but also quantitatively outperforms the state of the art in accuracy.
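The abstract's self-supervisory signal can be sketched as a render-and-compare loop: project the generated point cloud at the estimated pose and penalize the difference from the input view. The silhouette projection and loss below are a minimal illustrative sketch in NumPy, not the paper's actual differentiable renderer or network; all function names and the z-axis rotation parameterization are assumptions for illustration.

```python
import numpy as np

def project_points(points, pose, size=32):
    # Rotate the cloud by the estimated pose (here simplified to a single
    # rotation angle about the z-axis), then orthographically project the
    # points onto the image plane as a binary silhouette.
    c, s = np.cos(pose), np.sin(pose)
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    xy = (points @ R.T)[:, :2]
    img = np.zeros((size, size))
    # Map coordinates from [-1, 1] to pixel indices and splat the points.
    px = np.clip(((xy + 1.0) * 0.5 * (size - 1)).astype(int), 0, size - 1)
    img[px[:, 1], px[:, 0]] = 1.0
    return img

def self_supervised_loss(input_silhouette, points, pose):
    # Render-and-compare: render the generated cloud at the estimated pose
    # and measure its pixel-wise distance to the input view.
    rendered = project_points(points, pose, size=input_silhouette.shape[0])
    return float(np.mean((rendered - input_silhouette) ** 2))

# Toy usage: a random cloud compared against its own rendering.
rng = np.random.default_rng(0)
points = rng.uniform(-1.0, 1.0, size=(200, 3))
target = project_points(points, pose=0.3)
loss_same_pose = self_supervised_loss(target, points, pose=0.3)
loss_wrong_pose = self_supervised_loss(target, points, pose=1.5)
```

In the paper's full pipeline the pose fed to the renderer comes from the integrated pose estimation network, so the comparison is made against the input image in its own (unknown) pose; the sketch only shows why a correct pose drives the rendering loss toward zero.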