About the course
Although the digital photography industry is expanding rapidly, most digital cameras still look and feel like film cameras, and they offer roughly the same set of features and controls. However, as sensors and in-camera processing improve, cameras and mobile devices are beginning to offer capabilities that film cameras never had. These include the ability to refocus photographs after they have been taken, or to combine views captured with different camera settings, aim, or placement. Equally exciting are new technologies for creating efficient, controllable illumination. Future “flashbulbs” may be pulsed LEDs or video projectors, with the ability to selectively illuminate objects, recolor the scene, or extract shape information. These developments force us to relax our notion of what constitutes “a photograph.” They also blur the distinction between photography and scene modeling. These changes will lead to new photographic techniques, new scientific tools, and possibly new art forms.
In this course, we survey the converging technologies of digital photography, computational imaging, and image-based rendering, and we explore the new imaging modalities that they enable.
Lecturers
- Prof. Dr. Matthias Hullin
- M.Sc. Clara Callenberg
- M.Sc. Sebastian Werner
- M.Sc. Javier Grau
Requirements
This is an advanced course for students with a background in computer graphics or computer vision. Its content reflects our conviction that successful researchers in this area must understand both the algorithms and the underlying technologies. The lectures may be accompanied by readings from textbooks or the research literature; these readings will be handed out in class or placed on the course website. Students are expected to:
- attend the lectures and participate in class discussions
- complete the practical assignments (including a course project to be prepared and presented in teams).
An oral exam will conclude the course.
Winter 2019/20
This year, the Computational Photography course will not be held during the semester but as a block course of two and a half weeks during the spring break, from February 21 to March 11, full time. We will offer an info and sign-up event on Tuesday, February 11, 10am in INF 3.035b. We strongly recommend that interested students subscribe to the Computational Photography mailing list as soon as possible to receive updates.
Tentative Schedule
Unless otherwise noted, all lectures take place at 9:15am in Room INF 3.035b.
- Fri, Feb 21 – Intro (Hullin) – Lecture starts 9:00am sharp!
- Tue, Feb 25 – Sensors (Callenberg)
- Wed, Feb 26 – Optics (Hullin)
- Thu, Feb 27 – Panorama fusion (Hullin)
- Fri, Feb 28 – Inverse problems (Hullin)
- Mon, Mar 2 – Nonlinear filtering (Hullin)
- Tue, Mar 3 – Compressed sensing (NN)
- Wed, Mar 4 – Light fields (NN)
- Thu, Mar 5 – Reflectance fields (Hullin)
- Fri, Mar 6 – Time-of-flight imaging (NN)
- Mon, Mar 9 – Computational Illumination (Hullin)
- Tue, Mar 10 – Computational Display (Hullin)
- Wed, Mar 11 – Current Topics (Hullin)
Lecture Slides
Exercise Sheets
Lab
Additional Files