Candelieri, A., Ponti, A., Archetti, F. (2023). Generative Models via Optimal Transport and Gaussian Processes. In M. Sellmann, K. Tierney (Eds.), Learning and Intelligent Optimization: 17th International Conference, LION 17, Nice, France, June 4–8, 2023, Revised Selected Papers (pp. 135–149). Springer. DOI: 10.1007/978-3-031-44505-7_10
Generative Models via Optimal Transport and Gaussian Processes
Candelieri, A.; Ponti, A.; Archetti, F.
2023
Abstract
Generative models have recently attracted renewed interest due to their success in new real-life applications, such as artificially generated images, text, and audio. The most recent and successful approaches combine neural network learning with Optimal Transport theory, exploiting the so-called transportation map/plan to generate a new element of one domain from an element of another, while preserving the statistical properties of the two domains' data-generation processes. Although effective, the Neural Optimal Transport (NOT) approach is computationally expensive, since it requires training two nested deep neural networks, and additional noise must be injected to improve its generative properties. In this paper we present an alternative method, based on Gaussian Process (GP) regression, which overcomes these limitations. Unlike a neural model, a GP is probabilistic: for a given input, it provides both a prediction and the associated uncertainty. Generative behaviour is therefore guaranteed by design, by sampling the generated element around the prediction according to that uncertainty. Results on both toy examples and a dataset of images empirically demonstrate the benefits of the proposed approach.
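The generative mechanism described in the abstract — predicting with a GP and then sampling around the mean according to the predictive uncertainty — can be illustrated with a minimal sketch. This is a hypothetical toy illustration, not the authors' implementation: the 1-D source/target data, the RBF kernel, and all parameter choices are assumptions for demonstration only.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Toy "source" and "target" domains (1-D, purely illustrative).
X_src = rng.uniform(-2, 2, size=(30, 1))
y_tgt = np.sin(X_src).ravel() + 0.1 * rng.standard_normal(30)

# Fit a GP regression mapping source elements to target elements.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-2)
gp.fit(X_src, y_tgt)

# Generative step: for a new source element, the GP returns both a
# prediction (mean) and its uncertainty (std); the generated element
# is sampled around the mean, with spread governed by the uncertainty.
x_new = np.array([[0.5]])
mean, std = gp.predict(x_new, return_std=True)
generated = rng.normal(mean, std)  # stochastic output -> generative behaviour
```

Because the sampling step is stochastic, repeated calls with the same input produce different (but statistically consistent) outputs, which is the generative property the paper attributes to the GP's built-in uncertainty.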