Bianco, S., Cusano, C., Piccoli, F., Schettini, R. (2020). Personalized Image Enhancement Using Neural Spline Color Transforms. IEEE Transactions on Image Processing, 29, 6223-6236. doi: 10.1109/TIP.2020.2989584
Personalized Image Enhancement Using Neural Spline Color Transforms
Bianco, Simone; Cusano, Claudio; Piccoli, Flavio; Schettini, Raimondo
2020
Abstract
In this work we present SpliNet, a novel CNN-based method that estimates a global color transform for the enhancement of raw images. The method is designed to improve the perceived quality of the images by reproducing the ability of an expert photo editor. The transformation applied to the input image is found by a convolutional neural network specifically trained for this purpose. More precisely, the network takes as input a raw image and produces as output one set of control points for each of the three color channels. Then, the control points are interpolated with natural cubic splines and the resulting functions are globally applied to the values of the input pixels to produce the output image. Experimental results compare favorably against recent state-of-the-art methods on the MIT-Adobe FiveK dataset. Furthermore, we also propose an extension of SpliNet in which a single neural network is used to model the style of multiple reference retouchers by embedding them into a user space. The style of new users can be reproduced without retraining the network, after a quick modeling stage in which they are positioned in the user space on the basis of their preferences on a very small set of retouched images.
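The second stage described in the abstract — interpolating the predicted control points with natural cubic splines and applying the resulting curves globally, one per color channel — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the assumption of K evenly spaced control-point positions in [0, 1], and the clipping of the output are choices made here for clarity.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def apply_spline_transform(image, control_points):
    """Apply per-channel natural cubic spline tone curves to an image.

    image: float array in [0, 1], shape (H, W, 3).
    control_points: shape (3, K); for each channel, the K output values
        at K evenly spaced input positions (an assumption for this sketch;
        these would come from the CNN in the paper's pipeline).
    """
    k = control_points.shape[1]
    xs = np.linspace(0.0, 1.0, k)  # evenly spaced input positions
    out = np.empty_like(image)
    for c in range(3):
        # Natural boundary conditions: second derivative is zero at the ends.
        spline = CubicSpline(xs, control_points[c], bc_type='natural')
        # The same 1-D curve is applied globally to every pixel of channel c.
        out[..., c] = np.clip(spline(image[..., c]), 0.0, 1.0)
    return out
```

Because the transform is a global per-channel function, it is cheap to apply at full resolution even if the network predicting the control points runs on a downsampled input. With control points lying on the identity line, the fitted spline reduces to the identity and the image is returned unchanged.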
File: Bianco-2020-IEEETIP-VoR.pdf (Adobe PDF, 4.55 MB). Attachment type: Publisher's Version (Version of Record, VoR). License: all rights reserved. Access: archive managers only (copy available on request).