Multi-modal Sensor Fusion for Auto Driving Perception: A Survey

Kavli Affiliate: Li Xin Li

| First 5 Authors: Keli Huang, Botian Shi, Xiang Li, Xin Li, Siyuan Huang

| Summary:

Multi-modal fusion is a fundamental task for the perception of an autonomous
driving system, and it has recently attracted considerable research interest. However,
achieving strong performance is not easy due to noisy raw data,
underutilized information, and misalignment between multi-modal sensors.
In this paper, we provide a literature review of existing multi-modal
methods for perception tasks in autonomous driving. We analyze in detail
more than 50 papers that leverage perception sensors, including LiDAR and
camera, to solve object detection and semantic segmentation tasks. Departing
from the traditional methodology for categorizing fusion models, we propose a
taxonomy that divides them into two major classes and four minor
classes from the viewpoint of the fusion stage. Moreover, we examine current
fusion methods in depth, focusing on the remaining problems, and open a
discussion of potential research opportunities. In conclusion, this paper
presents a new taxonomy of multi-modal fusion methods for autonomous driving
perception tasks and aims to provoke thought on future fusion-based
techniques.
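The abstract distinguishes fusion methods by the stage at which modalities are combined. As a rough illustration of what "fusion stage" means (the function names, shapes, and weighting below are hypothetical, not the survey's actual taxonomy), the sketch contrasts feature-level fusion, where LiDAR and camera features are concatenated before any task head, with decision-level fusion, where each branch predicts independently and only the scores are merged:

```python
# Illustrative sketch only -- the classes, names, and weights here are
# assumptions for exposition, not the taxonomy proposed in the survey.

def early_fusion(lidar_feat, camera_feat):
    """Feature-level fusion: concatenate per-cell features from both
    sensors before any downstream task head sees them."""
    return [l + c for l, c in zip(lidar_feat, camera_feat)]

def late_fusion(lidar_scores, camera_scores, w=0.5):
    """Decision-level fusion: each branch predicts independently;
    the per-class scores are merged afterwards by a weighted average."""
    return [w * l + (1 - w) * c for l, c in zip(lidar_scores, camera_scores)]

# Toy example: two spatial cells with small hypothetical feature vectors.
lidar_feat = [[0.1, 0.2], [0.3, 0.4]]
camera_feat = [[0.5], [0.6]]
fused_feat = early_fusion(lidar_feat, camera_feat)
# fused_feat: [[0.1, 0.2, 0.5], [0.3, 0.4, 0.6]]

# Toy per-class scores from two independent detection branches.
lidar_scores = [0.9, 0.1]
camera_scores = [0.7, 0.3]
fused_scores = late_fusion(lidar_scores, camera_scores)  # approx. [0.8, 0.2]
```

The trade-off the survey's staging captures is visible even here: early fusion lets a single model exploit cross-modal correlations but is sensitive to sensor misalignment, while late fusion is robust to one modality failing but discards low-level cross-modal cues.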

| Search Query: ArXiv Query: search_query=au:"Li Xin Li"&id_list=&start=0&max_results=10
