Erba, Ilaria (2024). COMPUTATIONAL COLOR CONSTANCY BEYOND RGB IMAGES: MULTISPECTRAL AND TEMPORAL EXTENSIONS. (Doctoral thesis, Università degli Studi di Milano-Bicocca, 2024).
COMPUTATIONAL COLOR CONSTANCY BEYOND RGB IMAGES: MULTISPECTRAL AND TEMPORAL EXTENSIONS
ERBA, ILARIA
2024
Abstract
When it comes to visual perception, there are notable differences between the ways in which humans and machines interpret and understand images. Unlike image acquisition systems, the human eye can perceive object colors accurately regardless of the light source's color cast. To achieve a similar effect in digital images, a pre-processing step called Computational Color Constancy is used. Its purpose is to render images as if they were captured under a known light source. This problem is also important for computer vision applications that rely on the coherence of objects' colors. Unfortunately, a unique solution to this problem is unattainable; however, the scientific community has made significant efforts in developing both generalized and environment-specific solutions. Over the past few years, the cost of spectral sensors has decreased, making them more accessible, so much so that the first patents for the introduction of low-resolution spectral sensors in smartphone digital cameras have been published. The acquisition of spectral images is still influenced by the light source, and when the acquisition takes place in an uncontrolled environment, the availability of a reliable algorithm for computational color constancy becomes even more important. Therefore, the focus of the scientific community has partly shifted towards the estimation of a spectral illuminant, in order to support the acquisition of spectral images even in uncontrolled environments. One of the main purposes of this work is to improve the accuracy of color illuminant estimates by exploiting spectral information. To achieve this, two strategies are proposed. The first employs established statistics-based algorithms to estimate the illuminant in the spectral domain; four novel re-elaboration methods then take these spectral estimates as input and produce improved estimates in the color domain. The second strategy combines spectral and color information to enhance color illuminant estimation.
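As a point of reference for the statistics-based estimation and correction mentioned above, here is a minimal sketch of one classic algorithm of that family, Gray World, followed by a von Kries-style per-channel correction that re-renders the image as if taken under a neutral light. This is a generic illustration, not the re-elaboration methods proposed in the thesis; the function names and the simple normalization are assumptions.

```python
import numpy as np

def gray_world_illuminant(img: np.ndarray) -> np.ndarray:
    """img: H x W x C array (RGB or multispectral) with linear values.
    Returns a unit-norm per-channel illuminant estimate (Gray World)."""
    est = img.reshape(-1, img.shape[-1]).mean(axis=0)
    return est / np.linalg.norm(est)

def apply_diagonal_correction(img: np.ndarray, illuminant: np.ndarray) -> np.ndarray:
    """Per-channel (von Kries-style) scaling that maps the estimated
    illuminant to a neutral, gray one."""
    gains = illuminant.mean() / np.maximum(illuminant, 1e-6)
    return np.clip(img * gains, 0.0, 1.0)

# Usage on a synthetic image with a reddish cast (hypothetical data).
rng = np.random.default_rng(0)
img = rng.random((64, 64, 3)) * np.array([1.0, 0.8, 0.6])
estimate = gray_world_illuminant(img)
corrected = apply_diagonal_correction(img, estimate)
```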
The problem of Computational Color Constancy is complex and has many different aspects and applications. One area that has received little attention from the scientific community is the temporal domain. While computational color constancy aims to accurately render images taken under known light sources, temporal color constancy adds the further challenge of keeping the color of objects consistent across all frames. At the same time, it provides a temporal sequence of frames containing valuable information that can be exploited to improve the illuminant estimation. One solution adopted so far is to apply computational color constancy algorithms to each frame individually; however, this approach can introduce artifacts in which the color of an object changes from frame to frame. To avoid these issues, it is necessary to define a metric that can identify them. This thesis not only provides such a metric but also analyzes the temporal stability of several computational color constancy algorithms when applied in a single-frame manner. In addition, this thesis provides a temporal color constancy method. Existing methods usually estimate the illuminant of a selected frame, called the shot frame, by exploiting information extracted from previous frames. In this work, this assumption is extended: the proposed method takes a window of frames and returns an illuminant estimate for each frame in the window, under the assumption that not only the previous frames but all adjacent frames can be beneficial to the estimation of the illuminant of the shot frame.
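As a loose illustration of the temporal notions above, the sketch below measures frame-to-frame stability as the angular distance between consecutive per-frame illuminant estimates, and re-estimates each frame from a symmetric window of adjacent frames (past and future). It is not the metric or the method defined in the thesis; the averaging scheme and all names are assumptions.

```python
import numpy as np

def angular_error_deg(a: np.ndarray, b: np.ndarray) -> float:
    """Angle in degrees between two illuminant vectors."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

def temporal_instability(estimates: np.ndarray) -> np.ndarray:
    """Angular change between consecutive per-frame estimates (N-1 values)."""
    return np.array([angular_error_deg(estimates[i], estimates[i + 1])
                     for i in range(len(estimates) - 1)])

def windowed_estimates(estimates: np.ndarray, radius: int = 2) -> np.ndarray:
    """Re-estimate each frame's illuminant from a symmetric window of
    adjacent frames, here by simple averaging."""
    smoothed = []
    for i in range(len(estimates)):
        lo, hi = max(0, i - radius), min(len(estimates), i + radius + 1)
        e = estimates[lo:hi].mean(axis=0)
        smoothed.append(e / np.linalg.norm(e))
    return np.array(smoothed)

# Usage on hypothetical per-frame estimates for a 5-frame sequence.
per_frame = np.abs(np.random.default_rng(1).normal(size=(5, 3))) + 0.1
print(temporal_instability(per_frame))                       # single-frame stability
print(temporal_instability(windowed_estimates(per_frame)))   # windowed stability
```

A spike in the instability values marks exactly the kind of frame-to-frame color jump that single-frame processing can introduce.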
File | Attachment type | Size | Format
---|---|---|---
phd_unimib_795774.pdf (open access) | Doctoral thesis | 29.58 MB | Adobe PDF