Rota, C., Buzzelli, M., Bianco, S., Schettini, R. (2024). A RNN for Temporal Consistency in Low-Light Videos Enhanced by Single-Frame Methods. IEEE SIGNAL PROCESSING LETTERS, 31, 2795-2799 [10.1109/lsp.2024.3475969].

A RNN for Temporal Consistency in Low-Light Videos Enhanced by Single-Frame Methods

Rota, Claudio; Buzzelli, Marco; Bianco, Simone; Schettini, Raimondo
2024

Abstract

Low-light video enhancement (LLVE) has received little attention compared to low-light image enhancement (LLIE), mainly due to the lack of paired low-/normal-light video datasets. Consequently, a common approach to LLVE is to enhance each video frame individually using LLIE methods. However, this practice introduces temporal inconsistencies in the resulting video. In this work, we propose a recurrent neural network (RNN) that, given a low-light video and its per-frame enhanced version, produces a temporally consistent video that preserves the underlying frame-based enhancement. We achieve this by training our network with a combination of a new forward-backward temporal consistency loss and a content-preserving loss. At inference time, we can use our trained network to correct videos processed by any LLIE method. Experimental results show that our method achieves the best trade-off between temporal consistency improvement and fidelity to the per-frame enhanced video, exhibiting lower memory complexity and comparable time complexity compared with other state-of-the-art methods for temporal consistency.
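The abstract names the two training objectives but does not spell out their formulations. The following is a minimal PyTorch-style sketch of one plausible reading: a flow-based forward-backward temporal consistency term combined with a content-preserving term. All names (warp, flow_fwd, flow_bwd) and the loss weights are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the losses described in the abstract.
# Function names, the flow/warping utility, and the loss weights are
# illustrative assumptions, not taken from the paper.
import torch
import torch.nn.functional as F

def warp(frame, flow):
    """Backward-warp `frame` (B, C, H, W) with a dense flow field (B, 2, H, W)."""
    b, _, h, w = frame.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, device=frame.device),
        torch.arange(w, device=frame.device),
        indexing="ij",
    )
    base = torch.stack((xs, ys), dim=0).float().unsqueeze(0)  # (1, 2, H, W)
    grid = base + flow                                        # (B, 2, H, W)
    # Normalize sampling coordinates to [-1, 1] as grid_sample expects.
    gx = 2.0 * grid[:, 0] / (w - 1) - 1.0
    gy = 2.0 * grid[:, 1] / (h - 1) - 1.0
    return F.grid_sample(frame, torch.stack((gx, gy), dim=-1), align_corners=True)

def temporal_consistency_loss(out_t, out_prev, out_next, flow_fwd, flow_bwd):
    # Forward-backward consistency: the current output should agree with
    # the previous output warped forward in time and with the next output
    # warped backward (occlusion masks omitted for brevity).
    fwd = F.l1_loss(out_t, warp(out_prev, flow_fwd))
    bwd = F.l1_loss(out_t, warp(out_next, flow_bwd))
    return fwd + bwd

def content_preserving_loss(out_t, enhanced_t):
    # Anchor the corrected frame to the single-frame (LLIE) enhanced result.
    return F.l1_loss(out_t, enhanced_t)

def total_loss(out_t, out_prev, out_next, enhanced_t, flow_fwd, flow_bwd,
               lambda_tc=1.0, lambda_cp=1.0):  # weights are placeholders
    return (lambda_tc * temporal_consistency_loss(out_t, out_prev, out_next,
                                                  flow_fwd, flow_bwd)
            + lambda_cp * content_preserving_loss(out_t, enhanced_t))
```

In this reading, the forward-backward term encourages agreement with both temporal neighbors after motion compensation, while the content term keeps the output faithful to the underlying per-frame enhancement, matching the trade-off the abstract reports.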
Journal article - Scientific article
Keywords: Low-light video enhancement; temporal consistency; video processing
Language: English
Published online: 8 Oct 2024
Year: 2024
Volume: 31
Pages: 2795-2799
Files in this record:
No files are associated with this record.


Use this identifier to cite or link to this document: https://hdl.handle.net/10281/524519
Citations
  • Scopus: 0
  • Web of Science: 0