Zini, S., Buzzelli, M., Bianco, S., Schettini, R. (2022). COCOA: Combining Color Constancy Algorithms for Images and Videos. IEEE TRANSACTIONS ON COMPUTATIONAL IMAGING, 8, 795-807 [10.1109/TCI.2022.3203889].
COCOA: Combining Color Constancy Algorithms for Images and Videos
Zini, S.; Buzzelli, M.; Bianco, S.; Schettini, R.
2022
Abstract
We present an efficient combination strategy for color constancy algorithms. We define a compact neural network architecture to process and combine the illuminant estimations of individual algorithms, which may be based on different assumptions about the input scene content. Our solution can be specialized to the image domain, expecting a single frame as input, and to the video domain, exploiting a Long Short-Term Memory (LSTM) module to handle variable-length sequences. To prove the effectiveness of our combination method, we limit ourselves to combining only learning-free color constancy algorithms based on simple image statistics. We experiment on the standard Shi-Gehler and NUS datasets for still images, and on the recent Burst Color Constancy dataset for videos. Experimental results show that our method outperforms other combination strategies, and reaches an illuminant estimation accuracy comparable to more sophisticated and computationally demanding solutions when the standard dataset split is used. Furthermore, our solution is proven to be effective even when the number of available training instances is reduced. As a further analysis, we assess the individual contribution of each underlying method towards the final illuminant estimation.
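As an illustrative sketch of the image-domain combination idea described in the abstract — not the paper's actual architecture — each of N learning-free algorithms produces an RGB illuminant estimate, and a compact fully connected network maps the concatenated 3N-dimensional vector to a single refined, unit-norm RGB estimate. All layer sizes, weights, and the example algorithm names below are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def combine_estimates(estimates, w1, b1, w2, b2):
    """Fuse per-algorithm illuminant estimates with a tiny 2-layer MLP.

    This is a hypothetical stand-in for the compact combination
    network; the real model's size and training are in the paper.
    """
    x = np.concatenate(estimates)            # shape: (3 * N,)
    h = np.maximum(0.0, w1 @ x + b1)         # ReLU hidden layer
    y = w2 @ h + b2                          # raw RGB output
    return y / (np.linalg.norm(y) + 1e-12)   # (near) unit-norm illuminant

# Three mock per-algorithm RGB estimates, normalized to unit length
# (labels are illustrative examples of learning-free statistics methods).
ests = [np.array([0.58, 0.58, 0.58]),        # e.g. Gray-World
        np.array([0.70, 0.55, 0.46]),        # e.g. White-Patch
        np.array([0.62, 0.57, 0.54])]        # e.g. Shades-of-Gray
ests = [e / np.linalg.norm(e) for e in ests]

hidden = 8                                    # illustrative hidden size
w1, b1 = rng.normal(size=(hidden, 9)), np.zeros(hidden)
w2, b2 = rng.normal(size=(3, hidden)), np.zeros(3)

illum = combine_estimates(ests, w1, b1, w2, b2)
print(illum.shape)
```

For the video domain, the same per-frame estimates would instead be fed sequentially into an LSTM so that a variable number of frames can contribute to one illuminant estimate.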
File | Description | Type of attachment | License | Size | Format | Access
---|---|---|---|---|---|---
Zini-2022-IEEE Trans Computat Imag-VoR.pdf | Conference contribution | Publisher's Version (Version of Record, VoR) | All rights reserved | 5.4 MB | Adobe PDF | Archive managers only (request a copy)
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.