Kavli Affiliate: Alyosha Molnar
| First 5 Authors: Mayank Gupta, Arjun Jauhari, Kuldeep Kulkarni, Suren Jayasuriya, Alyosha Molnar
| Summary:
Light field imaging is limited by the computational demands of processing
high sampling rates in both the spatial and angular dimensions. Single-shot light field
cameras sacrifice spatial resolution to sample angular viewpoints, typically by
multiplexing incoming rays onto a 2D sensor array. While this resolution can be
recovered using compressive sensing, the iterative reconstruction algorithms involved
are slow to process a light field. We present a deep learning approach using a new
two-branch network architecture, consisting jointly of an autoencoder and a 4D CNN,
to recover a high-resolution 4D light field from a single coded 2D image. This
network decreases reconstruction time significantly while achieving average
PSNR values of 26-32 dB on a variety of light fields. In particular,
reconstruction time is decreased from 35 minutes to 6.7 minutes compared to a
dictionary-based method at equivalent visual quality. These reconstructions are
performed at sampling/compression ratios as low as 8%, allowing for
cheaper coded light field cameras. We test our network reconstructions on
synthetic light fields, simulated coded measurements of real light fields
captured from a Lytro Illum camera, and real coded images from a custom CMOS
diffractive light field camera. The combination of compressive light field
capture with deep learning allows the potential for real-time light field video
acquisition systems in the future.
| Search Query: ArXiv Query: search_query=au:"Alyosha Molnar"&id_list=&start=0&max_results=3
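
As a rough illustration of the ideas in the summary, here is a minimal, hypothetical PyTorch sketch: a coded 2D measurement is simulated by multiplexing a 4D light field through a random mask at roughly an 8% sampling ratio, and a small two-branch network (a 2D convolutional autoencoder plus a 3D-convolutional refinement stage standing in for the paper's 4D CNN) maps the coded image back to angular views. All layer sizes, tensor shapes, and the binary random mask are assumptions for illustration, not the authors' actual architecture.

# Hypothetical sketch of compressive light-field capture and a two-branch
# reconstruction network, loosely following the abstract's description.
# Shapes, layer sizes, and the use of 3D convolutions (in place of a true
# 4D CNN) are illustrative assumptions, not the authors' exact model.
import torch
import torch.nn as nn

S, T = 5, 5          # angular resolution (assumed)
H, W = 64, 64        # spatial patch size (assumed)
A = S * T            # number of angular views

def coded_capture(lf, mask):
    """Multiplex a 4D light field (B, A, H, W) onto a single coded 2D image.
    Each sensor pixel records a mask-weighted sum over its angular views."""
    return (lf * mask).sum(dim=1, keepdim=True)        # (B, 1, H, W)

class TwoBranchNet(nn.Module):
    """Autoencoder branch plus a volumetric refinement branch."""
    def __init__(self):
        super().__init__()
        # Branch 1: 2D convolutional autoencoder on the coded image.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, A, 3, padding=1),
        )
        # Branch 2: 3D convolutions over (view, height, width), standing in
        # for the paper's 4D CNN refinement stage.
        self.refine = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 1, 3, padding=1),
        )

    def forward(self, coded):
        views = self.decoder(self.encoder(coded))       # (B, A, H, W)
        refined = self.refine(views.unsqueeze(1))       # (B, 1, A, H, W)
        return refined.squeeze(1)                       # (B, A, H, W)

if __name__ == "__main__":
    lf = torch.rand(2, A, H, W)                         # toy light field
    mask = (torch.rand(1, A, H, W) < 0.08).float()      # ~8% sampling ratio
    coded = coded_capture(lf, mask)                     # single coded 2D image
    recon = TwoBranchNet()(coded)                       # recovered 4D light field
    print(coded.shape, recon.shape)

In practice the network would be trained with a reconstruction loss (e.g., L2 or PSNR-oriented) between recon and the ground-truth light field; the sketch only demonstrates the forward data flow from coded measurement to recovered angular views.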