Gaining insights into the information distribution of Light Fields and enabling adaptive Light Field processing
Abstract:
Thanks to smartphones equipped with multiple cameras, capturing a scene from several viewpoints has become increasingly accessible. Together with the growing computing capabilities of modern hardware, light field processing has therefore gained considerable attention in recent years [Br20; Fl19; Mi20]. Many of these techniques rely on neural networks to generate representations of the light field data, while other work assumes certain scene properties (such as Lambertian reflectance) to enable light field processing. The work presented here uses depth maps to transform the light field into a froxel-centered (frustum + voxel) [Ev15] representation, enabling unique post-processing steps and an analysis of the ray distribution in a scene. More importantly, it paves the way to quantifying the information distribution within a scene. Based on this information, appropriate adaptive filtering techniques can be applied.
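
To illustrate the froxel-centered representation outlined above, the following minimal sketch (Python/NumPy) bins a light field sample into a frustum-aligned voxel grid of a reference camera using its depth-map value. The helper names (backproject, froxel_index), the tile and slice counts, and the logarithmic depth slicing are assumptions for illustration only and do not reflect the actual parameterization used in this work.

import numpy as np

def backproject(pixel, depth, fx, fy, cx, cy):
    """Back-project a pixel with its depth-map value to a 3D point in the camera frame."""
    px, py = pixel
    x = (px - cx) / fx * depth
    y = (py - cy) / fy * depth
    return np.array([x, y, depth])

def froxel_index(point_cam, fov_y, aspect, near, far,
                 tiles_x=16, tiles_y=16, slices_z=64):
    """Map a camera-space point to a froxel index (tile_x, tile_y, depth_slice).

    The camera looks down +z; grid resolution and depth slicing are assumptions.
    """
    x, y, z = point_cam
    tan_half_fov = np.tan(fov_y / 2.0)
    # Perspective projection to normalized image coordinates in [-1, 1]
    u = x / (z * tan_half_fov * aspect)
    v = y / (z * tan_half_fov)
    # Screen-space tile indices
    ix = int(np.clip((u * 0.5 + 0.5) * tiles_x, 0, tiles_x - 1))
    iy = int(np.clip((v * 0.5 + 0.5) * tiles_y, 0, tiles_y - 1))
    # Logarithmic depth slicing between the near and far planes (assumption)
    iz = int(np.clip(np.log(z / near) / np.log(far / near) * slices_z,
                     0, slices_z - 1))
    return ix, iy, iz

# Usage sketch: accumulating per-froxel ray counts in this way would give a simple
# proxy for how the captured rays, and thus the information, are distributed in the scene.
point = backproject((640, 360), depth=2.5, fx=1000.0, fy=1000.0, cx=640.0, cy=360.0)
idx = froxel_index(point, fov_y=np.radians(60), aspect=16 / 9, near=0.1, far=100.0)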