Extraction of Surface Characteristics and Lighting in 3D Reconstruction from Uncalibrated Images

Publication date: 2017-08-29

Author:

Georgoulis, Stamatios

Keywords:

PSI_VISICS

Abstract:

In this thesis, we try to reverse the image formation process, enabling computers to factor images into their intrinsic components, i.e. 3D shape, surface reflectance, and environmental illumination. On the one hand, traditional approaches have relied on simplistic assumptions such as parametric Bidirectional Reflectance Distribution Function (BRDF) models for reflectance and remote point light sources for illumination. On the other hand, recent approaches with less strict reflectance and illumination assumptions still require the capture of High Dynamic Range (HDR) images and the use of dedicated hardware setups. Instead, we focus on readily available capturing devices that record Low Dynamic Range (LDR) images, and we output refined geometry, non-parametric BRDFs, and environmental illumination maps. Due to the highly under-constrained nature of the inverse rendering problem, we proceed in discrete steps.

First, we propose the use of a flash-equipped DSLR camera or smartphone, together with a method that combines the principles of Structure-from-Motion (SfM), Multi-view Stereo (MvS), and Photometric Stereo (PS), to capture an object's 3D shape and surface reflectance characteristics simultaneously. In particular, starting from a small sequence of LDR images depicting the object under flash illumination, a low-resolution mesh is generated using SfM and MvS; we then apply a new PS-based optimization technique to refine both geometry and reflectance, where the latter is expressed in terms of low-dimensional non-parametric BRDFs, the so-called BRDF slices.

Second, starting from the minimal input of a single BRDF slice generated with the camera/flash setup, which covers less than 0.001% of the whole BRDF domain, we predict the missing part of the BRDF. We propose a Gaussian Process Latent Variable Model (GPLVM) to infer the higher-dimensional properties of the material's BRDF, based on the statistical distribution of material characteristics observed in real-life BRDF samples.

Third, we investigate the recovery of natural HDR illumination from a single LDR image. We propose a deep Convolutional Neural Network (CNN) that combines reflectance and illumination priors with an input that exploits two key observations: (i) images rarely show a single material, but rather multiple ones all reflecting the same illumination, and (ii) parts of the illumination are often directly observed in the background, unaffected by reflection.

Finally, we tackle the problem as a whole and show how to estimate reflectance and HDR illumination from a single LDR image following a data-driven, learning-based approach in which none of the components (shape, reflectance, or illumination) is assumed known or simplified. To achieve this, we propose a two-step deep learning approach: we first estimate the object's Reflectance Map (RM) from the input LDR image, and then further decompose the RM into reflectance and HDR illumination. The proposed methods are validated on a large set of synthetic and real data, including comparisons to the state of the art.
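
To make the first step concrete, below is a minimal sketch of the classical least-squares photometric-stereo computation that underlies such geometry refinement. It assumes Lambertian reflectance and known per-image flash directions, whereas the thesis jointly optimizes the mesh and non-parametric BRDF slices; all names and values here are illustrative.

```python
import numpy as np

def estimate_normals(intensities, light_dirs):
    """Recover per-pixel normals and albedo from k flash-lit images.

    intensities : (k, h, w) grayscale images of the object.
    light_dirs  : (k, 3) unit light directions (hypothetical values,
                  e.g. derived from the flash position in each view).
    """
    k, h, w = intensities.shape
    I = intensities.reshape(k, -1)                       # (k, h*w)
    # Lambertian model: I = L @ G with G = albedo * normal per pixel.
    G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)   # (3, h*w)
    albedo = np.linalg.norm(G, axis=0)
    normals = G / np.maximum(albedo, 1e-8)
    return normals.reshape(3, h, w), albedo.reshape(h, w)

# Toy usage with synthetic data.
rng = np.random.default_rng(0)
L = rng.normal(size=(6, 3))
L /= np.linalg.norm(L, axis=1, keepdims=True)
images = rng.random((6, 8, 8))
normals, albedo = estimate_normals(images, L)
```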
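For the second step, the sketch below deliberately replaces the GPLVM with plain Gaussian-process regression over a one-dimensional half-angle parameterization, purely to illustrate how a sparse slice can be smoothly completed with uncertainty estimates. The observed values, kernel, and parameterization are assumptions, not the model from the thesis.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical observed slice: log-BRDF values at a few half-angles (radians).
theta_h_obs = np.array([0.02, 0.05, 0.1, 0.2, 0.3])[:, None]
log_brdf_obs = np.log(np.array([12.0, 8.5, 3.1, 0.9, 0.4]))

# Fit a GP in log space so the completed BRDF stays positive.
gp = GaussianProcessRegressor(kernel=RBF(0.1) + WhiteKernel(1e-3),
                              normalize_y=True)
gp.fit(theta_h_obs, log_brdf_obs)

# Predict the unobserved part of the domain, with per-point uncertainty.
theta_h_all = np.linspace(0.0, np.pi / 2, 90)[:, None]
mean, std = gp.predict(theta_h_all, return_std=True)
brdf_completed = np.exp(mean)
```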
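For the final two-step approach, here is a toy PyTorch sketch: one network maps the LDR input to a Reflectance Map, and a second decomposes the RM into a reflectance code and an HDR environment map. The architectures, sizes, and names are illustrative assumptions, not the networks used in the thesis.

```python
import torch
import torch.nn as nn

class RMEstimator(nn.Module):
    """LDR image -> Reflectance Map (RM) at the same resolution."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1))
    def forward(self, x):
        return self.net(x)

class RMDecomposer(nn.Module):
    """RM -> (reflectance code, HDR environment map)."""
    def __init__(self, env_h=16, env_w=32, refl_dim=8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.refl_head = nn.Linear(64, refl_dim)
        self.env_head = nn.Linear(64, 3 * env_h * env_w)
        self.env_shape = (3, env_h, env_w)
    def forward(self, rm):
        z = self.encoder(rm)
        refl = self.refl_head(z)
        # Predict log radiance so the output can exceed the LDR range.
        env = self.env_head(z).view(-1, *self.env_shape).exp()
        return refl, env

ldr = torch.rand(1, 3, 64, 64)          # toy LDR input
rm = RMEstimator()(ldr)
reflectance, illumination = RMDecomposer()(rm)
```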