Kavli Affiliate: Yi Zhou
| First 5 Authors: Xiuyuan Lu, Yi Zhou, Shaojie Shen
| Summary:
Neuromorphic event-based cameras are bio-inspired visual sensors with
asynchronous pixels and extremely high temporal resolution. Such favorable
properties make them an excellent choice for solving state estimation tasks
under aggressive ego motion. However, state-of-the-art event-based visual
odometry systems frequently suffer pose-tracking failures when the local map
cannot be updated in time. A major roadblock in this field is the lack of
efficient and robust data-association methods that do not impose assumptions
on the environment. This problem is unlikely to be solved as it is in
standard frame-based vision, because the observability of event data is
motion-dependent. Therefore, we propose a
mapping-free design for event-based visual-inertial state estimation in this
paper. Instead of estimating the position of the event camera, we find that
recovering the instantaneous linear velocity is more consistent with the
differential working principle of event cameras. The proposed event-based
visual-inertial velometer leverages a continuous-time formulation that
incrementally fuses the heterogeneous measurements from a stereo event camera
and an inertial measurement unit. Experiments on a synthetic dataset
demonstrate that the proposed method recovers the instantaneous linear
velocity at metric scale with low latency.
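The paper's continuous-time formulation is considerably more involved, but the core idea of incrementally fusing IMU propagation with visual velocity measurements can be sketched as a toy per-axis Kalman filter. This is a minimal illustration only; the function names, noise values, and discrete-time filter structure here are my assumptions, not the authors' implementation:

```python
import numpy as np

def imu_propagate(v, a_world, dt, P, Q):
    """Predict step: integrate world-frame acceleration; inflate covariance."""
    return v + a_world * dt, P + Q

def visual_update(v_pred, P, v_meas, R):
    """Correct step: fuse a velocity measurement (identity model), per axis."""
    K = P / (P + R)  # Kalman gain
    return v_pred + K * (v_meas - v_pred), (1.0 - K) * P

# Toy run: true velocity is constant at 1 m/s on each axis.
rng = np.random.default_rng(42)
v_true = np.ones(3)
v, P = np.zeros(3), 1.0     # initial estimate and (scalar) covariance
Q, R = 1e-3, 0.05           # assumed process / measurement noise
for _ in range(200):
    a_meas = rng.normal(0.0, 0.1, 3)          # noisy accel around zero
    v, P = imu_propagate(v, a_meas, 0.005, P, Q)
    v_meas = v_true + rng.normal(0.0, 0.05, 3)  # noisy visual velocity
    v, P = visual_update(v, P, v_meas, R)
```

After a few hundred fusion steps the estimate settles near the true 1 m/s on each axis, illustrating why direct velocity recovery can be made low-latency: each new measurement is absorbed incrementally, with no map to maintain.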
| Search Query: ArXiv Query: search_query=au:"Yi Zhou"&id_list=&start=0&max_results=3