VIV

The representation of captured images and video has remained unchanged since its infancy: an image is represented as pixels per line and lines per image, and a video is simply a series of images. But we have entered an era of profound change in content capture: multi-camera arrays even in mobile devices, depth sensing via time-of-flight or gated imaging, volumetric capture via LiDAR, and new capture paradigms such as event cameras show that computational imaging differs significantly from classical film-based capture, and hence new forms of representing images and videos are needed. The (pro-)seminar will review a palette of approaches for representing volumetric content. Starting from multiview video plus depth (MVD), we will introduce and discuss neural representations such as neural radiance fields and neural surfaces, as well as volumetric representations that take the capture setup into account (Froxels), and we will compare them to representations like point clouds and voxels. You can find the corresponding Moodle course here.