This seminar is offered jointly with the X-Rite Graduate School for Digital Material Appearance. On an irregular basis, we welcome experts from all over the world and invite them to share their vision.
- October 25, 2018, 9:00am; Endenicher Allee 19A, Room 3.035b: Femtosecond videography using coded illumination
Speaker: Dr. Elias Kristensson, Department of Combustion Physics at Lund University, Sweden
Abstract: High-speed cameras are becoming standard laboratory equipment for filming in slow motion, allowing researchers to film fast phenomena at up to a few hundred thousand frames per second. Although this is very impressive, some events in nature occur on a much shorter time scale. In this talk I will describe a new video method that makes it possible to film at up to 5,000,000,000,000 (five trillion) frames per second – fast enough to film light moving in space.
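As a back-of-the-envelope illustration of these frame rates (this calculation is ours, not part of the talk, and the function name is an assumption): at five trillion frames per second, the interval between frames is so short that light itself barely moves between exposures.

```python
# Illustrative only: relate a camera's frame rate to the inter-frame
# interval and to the distance light travels during that interval.
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def frame_interval_seconds(fps: float) -> float:
    """Time between two consecutive frames at a given frame rate."""
    return 1.0 / fps

# A fast conventional high-speed camera: ~100,000 fps,
# i.e. 10 microseconds between frames.
conventional = frame_interval_seconds(1e5)

# The method described in the talk: 5e12 fps,
# i.e. 0.2 picoseconds between frames.
ultrafast = frame_interval_seconds(5e12)

# Distance light covers between two such frames: roughly 0.06 mm,
# which is why light propagation itself becomes filmable.
light_travel = SPEED_OF_LIGHT * ultrafast
```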
- October 29, 2018, 11:00am; Endenicher Allee 19A, Room 3.035b: Intelligence for efficient image-based view synthesis
Speaker: Thomas Leimkühler, Max Planck Institute for Informatics, Germany
Abstract: Synthesizing novel views from image data is a widely investigated topic in both computer graphics and computer vision, with many applications such as stereo and multi-view rendering for VR, light field reconstruction, and image post-processing. While image-based approaches have the advantage of reduced computational load compared to classical model-based rendering, efficiency is still a major concern.
In my talk I will give an overview of my efforts to utilize concepts and tools from artificial intelligence to increase the efficiency of image-based view synthesis algorithms. In particular, I will talk about how probabilistic inference can be used to perform 2D-to-3D conversion, how path planning can guide image warping, how optimization leads to significant speedups in interactive distribution effect rendering, and how machine learning can help to generate sample patterns specifically tailored to computer graphics tasks.
- July 13, 2018, 9:30am; Endenicher Allee 19A, Room 3.035b: Intrinsic Light Field Decomposition
Speaker: Sumit Shekhar, Max Planck Institute for Informatics, Germany
Abstract: Light-field imaging has various advantages over traditional 2D photography, such as depth estimation and occlusion detection, which can aid intrinsic decomposition. The extracted intrinsic layers enable multiple applications, such as light-field appearance editing. However, current light-field intrinsic decomposition techniques primarily resort to qualitative comparisons, due to a lack of ground-truth data. In this talk I will discuss two projects addressing the above aspects.
In the first part I will talk about “Light-Field Appearance Editing Based on Intrinsic Decomposition”, where we propose a framework for image-based surface appearance editing of light-field data. This approach enables a rich variety of perceptually plausible surface finishes and materials, achieving novel effects such as “translucency”.
In the second part I will talk about “Light-Field Intrinsic Dataset” acquisition, where we capture intrinsic data for real-world and synthetic 4D and 3D (horizontal parallax only) light fields. The ground-truth intrinsic data comprises albedo, shading and specularity layers for all sub-aperture images. To the best of our knowledge this is the first such dataset for light fields; it can also be used for single-image, multi-view stereo and video settings.
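As background to the layer model mentioned in the abstract (albedo, shading and specularity), intrinsic decomposition commonly assumes that each pixel factors as albedo times shading plus a specular residual. A minimal sketch of that composition, with toy values and names of our own choosing:

```python
import numpy as np

def compose_intrinsic(albedo, shading, specular):
    """Recompose an image from its intrinsic layers: I = A * S + C,
    where A is albedo, S is shading and C is a specular residual."""
    return albedo * shading + specular

# Toy 2x2 single-channel example: a decomposition method would aim
# to recover the three layers on the right from the image alone.
albedo   = np.array([[0.8, 0.2], [0.5, 0.9]])
shading  = np.array([[1.0, 0.5], [0.7, 0.3]])
specular = np.array([[0.0, 0.1], [0.0, 0.0]])
image = compose_intrinsic(albedo, shading, specular)
```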
- March 10, 2017, 10:00, LBH I.80: From facial motion to meaning and back again
Speaker: Prof. Dr. Douglas Cunningham, BTU Cottbus, Germany
Abstract: The ability to rapidly and easily communicate a bewildering range of information to other people is generally thought to be one of the core abilities of humans. It should not be surprising, then, that many different disciplines have studied the structure and function of human language, and even tried to grant machines similar abilities. The vast majority of this work has focused on the specific words used in a communication. That is, they are interested in a message’s semantic meaning in the strictest sense. It is well known to anyone who has written a letter, however, that face-to-face communication allows so much more than a mere collection of words can. Thus, scientists are increasingly studying the so-called “non-verbal” elements of communication. In this talk we will look at research into understanding and synthesizing facial expressions. In a first step, advanced mathematical modeling techniques will be applied to established psychological methods to recover a metric space within which the full meaning of facial expressions, in all its subtlety, is represented. Next, a mapping will be recovered between the spatiotemporal elements of actual facial expressions and this semantic space. This mapping will be used to generate novel facial expressions with the desired expressive tone. The talk will conclude by discussing how this technique can be applied to just about any type of stimulus, including the visual quality of surface properties.
- November 9, 2015, 16:15, LBH III.03 (HS3): Capturing and simulating the interaction of light with the world around us
Speaker: Dr. Wenzel Jakob, ETH Zürich, Switzerland
Abstract: Driven by the increasing demand for photorealistic computer-generated images, graphics is currently undergoing a substantial transformation towards physics-based approaches that carefully process visual information characterizing both object shape and the interaction of light and matter. Progress on all fronts of this transition (acquisition, physical models and simulation techniques) has been steady but largely independent of one another. When combined, the resulting methods are in many cases impracticably slow and require unrealistic workarounds to process even simple everyday input. My research lies at the interface of these fields; my goal is to break down the barriers between acquisition, simulation techniques and the underlying physical models, and to use the resulting insights to develop realistic methods that remain efficient over a wide range of inputs.
I will cover three areas of recent work: the first involves volumetric modeling approaches to create realistic images of woven and knitted cloth. Next, I will discuss reflectance models for glitter/sparkle effects and arbitrarily layered materials that are specially designed to allow for efficient simulations. In the last part of the talk, I will give an overview of Manifold Exploration, a Markov Chain Monte Carlo technique that is able to reason about the geometric structure of light paths in high dimensional configuration spaces defined by the underlying physical models, and which uses this information to accelerate computation of rendered images and animation sequences. The talk will conclude with a discussion of future challenges in the areas of appearance modeling and light transport simulation.
- March 25, 2015, 10:00, LBH I.80: Appearance and 3D Reconstruction
Speaker: Prof. Dr. Hendrik P. A. Lensch, University of Tübingen, Germany
Abstract: In this presentation I will give an overview of two research activities of the computer graphics group at the University of Tübingen. The first part will cover bispectral appearance acquisition, demonstrating different measurement setups including a hyperspectral light stage for capturing the effect of fluorescence, where shorter-wavelength illumination is re-emitted at longer wavelengths. Reconstruction from non-exhaustive sampling is enabled by compressive sensing. The second part will cover real-time structure from motion for high spatial and temporal resolution video streams with thousands of frames. The key technical contributions are a robust selection of confident frames, a novel windowed bundle adjustment, robust frame-to-structure verification for globally consistent reconstructions with multi-loop closing, and the utilization of an efficient global linear camera pose estimation that links both consecutive and distant bundle adjustment windows.
- January 28, 2015, 10:00, LBH I.80: Adaptive Acquisition of Anisotropic Appearance
Speaker: Dr. Jiri Filip, Institute of Information Theory and Automation, Prague, Czech Republic
Abstract: Many real-world materials exhibit significant changes in appearance when rotated about the surface normal. Unfortunately, the reproduction of this behavior, often referred to as visual anisotropy, is omitted for the sake of feasibility in a number of material appearance measurement and modelling approaches. Contrary to the analysis of isotropic materials, where the locations of specular highlights can be predicted, the analysis of anisotropic ones is more challenging: the number and location of anisotropic highlights in angular space are unknown, as they depend entirely on the initial orientation of the measured material and on its optical properties. While recording anisotropic appearance using BRDFs or BTFs is crucial for the realistic representation of materials, the related time- and resource-demanding measurement process remains one of the main challenges in computer graphics. The talk will describe our approaches to identifying a material’s anisotropy and measuring it using material-dependent adaptive sampling strategies. I will show that such approaches allow more accurate anisotropic appearance measurement in the same amount of acquisition time, and consequently the development of more effective measurement setups.
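The idea of material-dependent adaptive sampling can be conveyed with a toy sketch (ours, not the speaker’s method): scan reflectance coarsely over the azimuthal rotation of a sample, then insert extra samples wherever neighbouring measurements differ strongly, since sharp changes hint at anisotropic highlight lobes. All names and thresholds below are assumptions for illustration.

```python
import numpy as np

def adaptive_azimuth_samples(reflectance_fn, coarse_n=8, threshold=0.1):
    """Toy adaptive sampling over azimuthal rotation of a material sample:
    start from a coarse uniform scan and add a midpoint sample wherever
    two neighbouring measurements differ by more than `threshold`
    (a crude cue for an anisotropic highlight between them)."""
    step = 2 * np.pi / coarse_n
    angles = [i * step for i in range(coarse_n)]
    values = [reflectance_fn(a) for a in angles]
    refined = []
    for i in range(coarse_n):
        j = (i + 1) % coarse_n  # wrap around the full rotation
        refined.append(angles[i])
        if abs(values[i] - values[j]) > threshold:
            refined.append(angles[i] + step / 2)
    return refined

# A strongly anisotropic toy "material": sharp highlight lobes whose
# reflectance varies rapidly with the rotation angle phi.
brdf_slice = lambda phi: float(np.exp(-(np.sin(2 * phi) ** 2) / 0.01))
samples = adaptive_azimuth_samples(brdf_slice)
```

For this toy material every coarse interval straddles a sharp transition, so the scheme refines everywhere; for a near-isotropic material it would return only the coarse scan, spending the measurement budget where it matters.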
- January 8, 2015, 13:00: Lab Excursion to X-Rite Bonn (closed to the general public)
- December 8, 2014, 16:15, LBH HS III.03a: Problem-Aware Digitisation of Cultural-Heritage Artefacts
Speaker: Prof. Dr. Tim Weyrich, University College London, UK
Abstract: Through the increasing availability of high-quality consumer hardware for advanced imaging tasks, digital imaging and scanning are gradually pervading general practice in cultural heritage preservation and archaeology. In most cases, however, imaging and scanning are predominantly means of documentation and archival, and digital processing ends with the creation of a digital image or 3D model. Using two projects as examples, the speaker will demonstrate how careful analysis of the underlying cultural-heritage questions allows for bespoke solutions that, through the joint development of imaging procedures, data analysis and visualisations, directly support conservators and humanities researchers in their work. Tim Weyrich will report on his experiences with fresco reconstruction at the Akrotiri Excavation, Santorini, and on the reconstruction of fire-damaged parchment with London Metropolitan Archives.
- November 3, 2014, 16:15, LBH HS III.03a: Statistical Appearance Models in the Perception of Materials
Speaker: Prof. Dr. Roland Fleming, University of Gießen, Germany
Abstract: Internal models that explicitly represent the behaviour of a complex system are thought to be important in many areas of cognition and behaviour, such as generative grammars in language production and comprehension, and forward models of limb movements in action execution. In this presentation, I will argue that when we visually perceive the properties of materials and objects, we develop internal models that capture the ‘typical appearance’ of the material, and the way that appearance changes systematically across viewing conditions. I’ll discuss gloss perception and the inference of fluid viscosity from shape cues. Using these examples I’ll argue that the visual system doesn’t actually estimate physical parameters of materials and objects. Instead, I suggest, the brain is remarkably adept at building ‘statistical generative models’ that capture the natural degrees of variation in appearance between samples. For example, when determining perceived glossiness, the brain doesn’t estimate parameters of the BRDF. Instead, it uses a constellation of low- and mid-level image measurements to characterize the extent to which the surface manifests specular reflections. Likewise, when determining apparent viscosity, the brain uses many general-purpose shape and motion measurements to characterize the behaviour of a material and relate it to other samples it has seen before. I’ll argue that these ‘statistical generative models’ are both more expressive and easier to compute than physical parameters, and therefore represent a powerful middle way between a ‘bag of tricks’ and ‘inverse optics’. In turn, this leads to some intriguing future directions about how ‘generative’ representations of shape could be used for inferring not only material properties but also causal history and class membership from few exemplars. I will also try to make connections to Computer Graphics along the way.
- September 25, 2014, 13:00, LBH I.80: Internal workshop (closed to the general public)
Speakers: Prof. Dr. Matthias Hullin, Dr. Marc Ellens, Dr. Gero Müller, Dennis den Brok, Julian Iseringhausen, Rodrigo Martín, David Seca, Heinz Christian Steinhausen, Michael Weinmann.
- July 14, 2014, 16:00, LBH I.80: Internal workshop (closed to the general public)
Speakers: Prof. Dr. Matthias Hullin, Dr. Gero Müller.
- July 9, 2014, 10:30, LBH I.80: Perception Matters
Speaker: Dr. Philipp Urban, Fraunhofer IGD Darmstadt, Germany
Abstract: Two major classes of problems in perception-based image processing have been under active investigation for decades. The first class covers ill-posed problems of restoring distorted images (e.g. by noise or blur). The second class of problems aims to actively distort images in order to meet constraints such as memory consumption, dynamic range or color limits. All of these problems share the objective of minimizing perceived errors. To design algorithms that reach this goal, a metric is required that reflects the perceived distance between images.
In this talk, I will describe our effort to develop such a metric particularly for color images and to employ it for optimal gamut mapping, i.e. distorting an image so that it fits into the color limits of a reproduction device (e.g. display or printer) with minimal perceptual error.
At the end of the talk, I will show how concepts based on perceptual models can be used to solve problems in spectral printing, aiming at an illuminant-invariant match between original and reproduction. I will also show some of these prints, which cannot be reproduced by commercial copiers.
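The basic idea of gamut mapping mentioned in this abstract can be sketched with a toy example (ours, far simpler than the metric and mapping discussed in the talk): move an out-of-gamut color toward the neutral axis until it fits a simplified device gamut, and quantify the damage with a Euclidean distance in the perceptually uniform CIELAB space (the classic CIE76 ΔE). All function names and the cylindrical gamut are assumptions for illustration.

```python
import numpy as np

def delta_e_ab(lab1, lab2):
    """CIE76 color difference: Euclidean distance in CIELAB."""
    return float(np.linalg.norm(np.asarray(lab1) - np.asarray(lab2)))

def clip_toward_gray(lab, max_chroma):
    """Project an out-of-gamut CIELAB color onto a toy cylindrical gamut
    (chroma <= max_chroma) by moving it straight toward the neutral axis,
    preserving lightness L and hue angle."""
    L, a, b = lab
    chroma = (a ** 2 + b ** 2) ** 0.5
    if chroma <= max_chroma:
        return (L, a, b)  # already reproducible, leave untouched
    scale = max_chroma / chroma
    return (L, a * scale, b * scale)

original = (60.0, 80.0, 60.0)            # chroma 100, outside the toy gamut
mapped = clip_toward_gray(original, 50.0)
err = delta_e_ab(original, mapped)       # perceptual error of the mapping
```

A real gamut-mapping algorithm searches over many such candidate mappings (and uses a far better image-difference metric than per-pixel ΔE) to minimize the visible error across the whole image.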