Kavli Affiliate: Yi Zhou
| First 5 Authors: Kaizhen Sun, Jinghang Li, Kuan Dai, Bangyan Liao, Wei Xiong
| Summary:
Time-to-Collision (TTC) estimation lies at the core of the forward collision
warning (FCW) functionality, which is key to all Automatic Emergency Braking
(AEB) systems. Although frame-based camera solutions (e.g., Mobileye’s) have
proven successful in ordinary situations, extreme cases, such as a sudden
change in the relative speed of the leading vehicle or the sudden appearance
of a pedestrian, still pose significant risks that such systems cannot
handle. This is due to the inherent imaging principle of
frame-based cameras, where the time interval between adjacent exposures
introduces considerable system latency to AEB. Event cameras, a novel class
of bio-inspired sensors, offer ultra-high temporal resolution and can
asynchronously report brightness changes at the microsecond level. To explore
the potential of event cameras in the above-mentioned challenging cases, we
propose EvTTC, which is, to the best of our knowledge, the first multi-sensor
dataset focusing on TTC tasks under high-relative-speed scenarios. EvTTC
consists of data collected using standard cameras and event cameras, covering
various potential collision scenarios in daily driving and involving multiple
collision objects. Additionally, LiDAR and GNSS/INS measurements are provided
for the calculation of ground-truth TTC. Considering the high cost of testing
TTC algorithms on full-scale mobile platforms, we also provide a small-scale
TTC testbed for experimental validation and data augmentation. All data and
the testbed design are open-sourced and can serve as a benchmark to
facilitate the development of vision-based TTC techniques.
| Search Query: ArXiv Query: search_query=au:"Yi Zhou"&id_list=&start=0&max_results=3
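
For concreteness, the ground-truth TTC mentioned in the summary can be computed from ranged distance measurements as distance divided by closing speed. The sketch below is a minimal illustration under that assumption; the function name, finite-difference scheme, and sampling setup are illustrative, not the dataset's actual toolchain.

```python
import numpy as np

def ground_truth_ttc(distances, timestamps):
    """Estimate ground-truth TTC from ranged distance samples.

    TTC = d / (-dd/dt): distance to the leading object divided by the
    closing speed (negative range rate). Assumes `distances` (meters)
    and `timestamps` (seconds) come from LiDAR or GNSS/INS measurements.
    """
    distances = np.asarray(distances, dtype=float)
    timestamps = np.asarray(timestamps, dtype=float)
    # Range rate via finite differences; negative while closing in.
    range_rate = np.gradient(distances, timestamps)
    closing_speed = -range_rate
    # TTC is only defined while the gap is shrinking.
    return np.where(closing_speed > 0, distances / closing_speed, np.inf)

# Example: object 20 m ahead, closing at a constant 5 m/s -> TTC ~ 4 s.
t = np.linspace(0.0, 1.0, 11)
d = 20.0 - 5.0 * t
print(ground_truth_ttc(d, t)[0])  # ~4.0
```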