SAR-NeRF: Neural Radiance Fields for Synthetic Aperture Radar Multi-View Representation

Kavli Affiliate: Feng Wang

| First 5 Authors: Zhengxin Lei, Feng Xu, Jiangtao Wei, Feng Cai, Feng Wang

| Summary:

SAR images are highly sensitive to the observation configuration and vary
significantly across viewing angles, which makes their anisotropic features
difficult to represent and learn. As a result, deep learning methods often
generalize poorly across view angles.
Inspired by the concept of neural radiance fields (NeRF), this study combines
SAR imaging mechanisms with neural networks to propose a novel NeRF model for
SAR image generation. Following the mapping and projection principles, a set of
SAR images is modeled implicitly as a function of attenuation coefficients and
scattering intensities in the 3D imaging space through a differentiable
rendering equation. SAR-NeRF is then constructed to learn the distribution of
attenuation coefficients and scattering intensities of voxels, where the
vectorized form of the 3D voxel SAR rendering equation and the sampling
relationship between the 3D space voxels and the 2D view ray grids are
analytically derived. Through quantitative experiments on various datasets, we
thoroughly assess the multi-view representation and generalization capabilities
of SAR-NeRF. Additionally, it is found that a SAR-NeRF-augmented dataset can
significantly improve SAR target classification performance in a few-shot
learning setup, where a 10-class classification accuracy of 91.6% is achieved
using only 12 images per class.
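
Since the summary describes a NeRF-style differentiable rendering of attenuation coefficients and scattering intensities along view rays, the sketch below illustrates that kind of per-ray accumulation. It assumes a generic transmittance-weighted sum; the paper's actual vectorized 3D voxel SAR rendering equation and the voxel-to-ray-grid sampling are derived in the full text and are not reproduced here, and the function `render_ray` and its arguments are hypothetical.

```python
import numpy as np

def render_ray(attenuation, scattering, deltas):
    """Illustrative NeRF-style accumulation along one view ray.

    attenuation : per-sample attenuation coefficients (>= 0)
    scattering  : per-sample scattering intensities
    deltas      : per-sample path lengths along the ray
    Returns the accumulated echo intensity for the ray.
    """
    # Fraction of the incident wave absorbed/scattered within each segment.
    alpha = 1.0 - np.exp(-attenuation * deltas)
    # Transmittance: fraction of the wave reaching each sample after
    # attenuation by all samples in front of it along the ray.
    transmittance = np.cumprod(
        np.concatenate(([1.0], np.exp(-attenuation[:-1] * deltas[:-1])))
    )
    # Transmittance-weighted sum of scattering contributions.
    return np.sum(transmittance * alpha * scattering)

# Example: three samples along a ray with unit spacing.
echo = render_ray(np.array([0.1, 0.5, 0.2]),
                  np.array([0.8, 1.2, 0.4]),
                  np.ones(3))
```

In the full model, an MLP queried at 3D voxel positions would predict the attenuation and scattering values, and the accumulation would be carried out in a vectorized, differentiable form so the network can be trained against observed SAR images.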

| Search Query: ArXiv Query: search_query=au:"Feng Wang"&id_list=&start=0&max_results=10
