Kavli Affiliate: Zheng Zhu
| First 5 Authors: Yunpeng Zhang, Zheng Zhu, Dalong Du
| Summary:
Vision-based perception for autonomous driving has undergone a
transformation from bird's-eye-view (BEV) representations to 3D semantic
occupancy. Compared with BEV planes, 3D semantic occupancy additionally
provides structural information along the vertical direction. This paper
presents OccFormer, a dual-path transformer network to effectively process the
3D volume for semantic occupancy prediction. OccFormer achieves long-range,
dynamic, and efficient encoding of the camera-generated 3D voxel features.
This is achieved by decomposing the heavy 3D processing into local and global
transformer pathways along the horizontal plane. For the occupancy decoder, we
adapt the vanilla Mask2Former for 3D semantic occupancy by proposing
preserve-pooling and class-guided sampling, which notably mitigate the sparsity
and class imbalance. Experimental results demonstrate that OccFormer
significantly outperforms existing methods for semantic scene completion on
the SemanticKITTI dataset and for LiDAR semantic segmentation on the nuScenes
dataset.
Code is available at https://github.com/zhangyp15/OccFormer.
| Search Query: [#feed_custom_title]
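The abstract's core idea, decomposing heavy 3D processing into a local pathway and a global pathway along the horizontal plane, can be illustrated with a toy sketch. This is a minimal assumption-laden illustration, not the paper's implementation: the local and global operations below are simple averaging placeholders standing in for OccFormer's windowed and BEV-level transformer blocks, and the fusion is a fixed average rather than a learned one.

```python
import numpy as np

def dual_path_sketch(voxels):
    """Toy dual-path decomposition of a 3D voxel feature volume.

    voxels: array of shape (C, X, Y, Z), channels first.
    The local and global ops are placeholders, not the paper's blocks.
    """
    # Local path: operate within each horizontal slice independently
    # (stand-in for local windowed attention along the horizontal plane).
    local = voxels - voxels.mean(axis=(1, 2), keepdims=True)

    # Global path: collapse the height axis into a BEV plane
    # (stand-in for global attention on BEV features), then
    # broadcast the result back along the vertical direction.
    bev = voxels.mean(axis=3)                       # (C, X, Y)
    global_path = np.broadcast_to(bev[..., None], voxels.shape)

    # Fuse the two pathways; OccFormer learns this combination,
    # here we simply average.
    return 0.5 * (local + global_path)

features = np.random.rand(8, 16, 16, 4)
fused = dual_path_sketch(features)
print(fused.shape)  # (8, 16, 16, 4)
```

The point of the decomposition is cost: full 3D attention over an X x Y x Z volume is expensive, whereas per-slice local processing plus a single collapsed BEV pass keeps most computation on 2D planes.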