WIN: Variable-View Implicit LiDAR Upsampling Network
Conglang Zhang, Chen Long, Hang Xu, Wenxiao Zhang, Z. Dong, Bisheng Yang
Published 2025 in IEEE Transactions on Intelligent Transportation Systems (Print)
ABSTRACT
LiDAR upsampling aims to increase the resolution of sparse point sets obtained from low-cost sensors, improving performance on various downstream tasks (e.g., autonomous driving, high-definition maps). Existing methods transform the LiDAR point cloud into a range view and focus on designing complex encoders or interpolation strategies to improve the resolution of LiDAR range images. However, our analysis shows that using the range view inevitably results in the loss of geometric information. We propose a Variable-View Implicit LiDAR Upsampling network, named WIN, to solve this problem. It decouples the range view into two novel virtual view representations: the Horizontal Range View (HRV) and the Vertical Range View (VRV). The key idea is that introducing more perspectives compensates for the geometric information lost in a single perspective. We also prove theoretically that the proposed virtual view representations have a smaller error range than the range view representation. In addition, we design two novel strategies (i.e., a contrast selection module and a selection loss) to fuse the upsampling results of the two virtual representations and stabilize the training process. As a result, compared with the current state-of-the-art (SOTA) method ILN, WIN introduces only 0.4M additional parameters, yet achieves a 4.53% improvement in MAE and a 7.01% improvement in IoU on the CARLA dataset. Furthermore, our method also outperforms all existing methods in downstream tasks (i.e., depth completion and localization). The code and pre-trained models are available at https://github.com/WHU-USI3DV/WIN.
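The geometric loss the abstract attributes to range views comes from quantizing a continuous point cloud onto a fixed H x W grid. As a minimal illustration (not the paper's code), the sketch below shows the standard spherical range-view projection; the sensor parameters (64 beams, a vertical field of view of [-25, 3] degrees, 1024 azimuth bins) are assumed values for a typical rotating LiDAR, and points that fall into the same cell overwrite one another — exactly the kind of information loss a second virtual view could recover.

```python
import numpy as np

def to_range_view(points, H=64, W=1024, fov_up=3.0, fov_down=-25.0):
    """Project an (N, 3) point cloud onto an H x W range image.

    Assumed sensor geometry (hypothetical): 64 beams spanning a vertical
    field of view of [fov_down, fov_up] degrees, 1024 azimuth bins.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)                 # range of each point
    yaw = np.arctan2(y, x)                             # azimuth in [-pi, pi]
    pitch = np.arcsin(z / np.maximum(r, 1e-8))         # elevation angle
    fov = np.radians(fov_up - fov_down)
    u = 0.5 * (1.0 - yaw / np.pi) * W                  # fractional column
    v = (1.0 - (pitch - np.radians(fov_down)) / fov) * H  # fractional row
    u = np.clip(np.floor(u), 0, W - 1).astype(int)     # quantize to grid
    v = np.clip(np.floor(v), 0, H - 1).astype(int)
    img = np.zeros((H, W), dtype=np.float32)
    img[v, u] = r  # collisions: a later point silently overwrites an earlier one
    return img
```

The overwrite on the last assignment is where geometry is discarded: two 3D points mapping to the same (v, u) cell keep only one range value, which motivates representing the scene from additional virtual viewpoints.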
PUBLICATION RECORD
- Publication year
2025
- Venue
IEEE transactions on intelligent transportation systems (Print)
- Publication date
2025-11-01
- Fields of study
Computer Science, Engineering, Environmental Science
- Identifiers
- External record
- Source metadata
Semantic Scholar