Machine Learning Applications in Appearance Modelling

This thesis was published by Alejandro Sztrajman in July 2022 at UCL (University College London).

Abstract:

In this thesis, we address multiple applications of machine learning in appearance modelling. We do so by leveraging data-driven approaches, guided by image-based error metrics, to generate new representations of material appearance and scene illumination.

We first address the interchange of material appearance between different analytic representations, through an image-based optimisation of BRDF model parameters. We analyse the method in terms of stability with respect to variations of the BRDF parameters, and show that it can be used for material interchange between different renderers and workflows, without the need to access shader implementations. We extend our method to enable the remapping of spatially-varying materials, by presenting two regression schemes that allow us to learn the transformation of parameters between models and apply it to texture maps at fast rates.
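The core idea of image-based parameter remapping can be sketched as follows. This is a toy illustration, not the thesis's method: the two lobe models (Blinn-Phong and a Beckmann-style lobe), the 1D highlight "image", and the optimiser are all assumed stand-ins for the renderer-based pipeline described above.

```python
import numpy as np
from scipy.optimize import minimize

# Toy remapping: find the Beckmann-style roughness alpha whose rendered
# highlight profile best matches a Blinn-Phong lobe with exponent n = 100.
# The "image" here is a 1D profile over the half-angle theta_h.
theta_h = np.linspace(0.0, np.pi / 2, 256)

def blinn_phong(n, th):
    return np.cos(th) ** n

def beckmann(alpha, th):
    t = np.tan(th)
    return np.exp(-((t / alpha) ** 2)) / (np.pi * alpha**2 * np.cos(th) ** 4 + 1e-9)

# Source material rendered with model A (normalised for comparison)
source = blinn_phong(100.0, theta_h)
source /= source.max()

def image_loss(params):
    """Pixel-wise MSE between source and candidate target renderings."""
    alpha = abs(params[0]) + 1e-4          # keep roughness positive
    target = beckmann(alpha, theta_h)
    target /= target.max() + 1e-9
    return np.mean((source - target) ** 2)

# Optimise the target model's parameters against the image metric
res = minimize(image_loss, x0=[0.5], method="Nelder-Mead")
remapped_alpha = abs(res.x[0]) + 1e-4
```

A regression scheme, as described above for spatially-varying materials, would then learn this exponent-to-roughness mapping once and apply it per-texel to texture maps, avoiding a per-pixel optimisation.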
Next, we centre on the efficient representation and rendering of measured material appearance. We develop a neural BRDF representation that provides high-quality reconstruction with low storage and evaluation times competitive with analytic models. Our method compares favourably against other representations in terms of reconstruction accuracy, and we show that it can also be used to encode anisotropic materials. In addition, we generate a unified encoding of real-world materials via a meta-learning autoencoder architecture guided by a differentiable rendering loss. This enables the generation of new realistic materials by interpolation of embeddings, and the fast estimation of material properties. We show that this can be leveraged for efficient rendering through importance sampling, by predicting the parameters of an invertible analytic BRDF model.
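To make the storage/evaluation trade-off concrete, the sketch below shows a tiny MLP mapping a direction pair to RGB reflectance. The layer sizes, input encoding, and random weights are illustrative assumptions, not the network from the thesis; in practice the weights would be fit to measured BRDF data.

```python
import numpy as np

rng = np.random.default_rng(0)

class NeuralBRDF:
    """Minimal MLP sketch: (wi, wo) -> RGB reflectance."""
    def __init__(self, hidden=21):
        # 6 inputs: concatenated incoming and outgoing unit vectors
        self.W1 = rng.normal(0.0, 0.5, (6, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0.0, 0.5, (hidden, 3))
        self.b2 = np.zeros(3)

    def eval(self, wi, wo):
        x = np.concatenate([wi, wo])
        h = np.maximum(x @ self.W1 + self.b1, 0.0)   # ReLU hidden layer
        return np.exp(h @ self.W2 + self.b2)         # exp keeps reflectance positive

brdf = NeuralBRDF()
wi = np.array([0.0, 0.0, 1.0])
wo = np.array([0.5, 0.0, np.sqrt(0.75)])
rgb = brdf.eval(wi, wo)

# Storage: a few hundred scalars, versus millions of tabulated samples
# for a densely measured BRDF.
n_params = sum(w.size for w in (brdf.W1, brdf.b1, brdf.W2, brdf.b2))
```

Evaluation is two small matrix products, which is why such a representation can approach the per-sample cost of an analytic model.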
Finally, we design a hybrid representation for high-dynamic-range illumination that combines a convolutional-autoencoder encoding for low-intensity light with a parametric model for high-intensity light. Our model provides a flexible, compact encoding for environment maps, while also preserving an accurate reconstruction of the high-intensity component, appropriate for rendering purposes. We utilise our light encodings in a second convolutional neural network trained for light prediction from a single outdoor face portrait at interactive rates, with potential applications in real-time light prediction and 3D object insertion.
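The low/high split at the heart of such a hybrid encoding can be sketched as below. The threshold value and the single-peak parametric light (weighted mean position plus total intensity) are simplifying assumptions for illustration; the thesis's parametric model and autoencoder branch are more elaborate.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy HDR environment map: diffuse sky plus one very bright sun pixel
env = rng.uniform(0.0, 1.0, (16, 32))
env[4, 10] = 500.0

# Split into high-intensity (parametric) and low-intensity (autoencoder) parts
threshold = 10.0
high = np.where(env > threshold, env, 0.0)
low = env - high                      # residual for the learned encoding branch

# Parametric summary of the high-intensity component:
# intensity-weighted mean position and total energy of the bright pixels
ys, xs = np.nonzero(high)
weights = high[ys, xs]
sun_intensity = weights.sum()
sun_pos = (np.average(ys, weights=weights), np.average(xs, weights=weights))

# Reconstruction: parametric peak re-injected into the low-intensity map
# (here the low part is kept uncompressed in place of the autoencoder)
recon = low.copy()
recon[int(round(sun_pos[0])), int(round(sun_pos[1]))] += sun_intensity
```

Keeping the peak parametric is what preserves sharp shadows and correct total energy when the encoding is used for rendering, since a lossy autoencoder would tend to blur or clip the highest intensities.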
