
(2025). Deep learning methods for video restoration. (Doctoral thesis, , 2025).

Deep learning methods for video restoration

ROTA, CLAUDIO
2025

Abstract

In recent years, the demand for high-quality video content has increased considerably. As video quality is often compromised by various factors, including compression, noise, and adverse environmental conditions, the need for video restoration methods has become crucial. These methods aim to restore the quality of degraded videos and address a range of tasks, including denoising, super-resolution, deflickering, and compression artifact reduction. Although significant progress has been made, several challenges remain unsolved, and addressing them is essential to ensure the robustness, efficiency, and broad applicability of video restoration methods. The objective of this thesis is to investigate key challenges in video restoration and to propose novel methods to address them. The direct application of single-image methods to video is studied first. This approach enables the use of well-established image processing methods for video-related tasks, but it may introduce flickering artifacts, requiring post-processing deflickering methods to remove them. In this context, one of the main challenges is reducing flickering artifacts while preserving the appearance of the per-frame processed video. In this thesis, a recurrent neural network with frame alignment mechanisms, trained with multiple loss functions, is proposed to balance the trade-off between these two contrasting goals. Single-image methods can also be adapted to the video domain by introducing mechanisms that capture the temporal relationships among frames. This topic is investigated here in the context of video super-resolution (VSR), which specifically benefits from exploiting these inter-frame relationships. In VSR, generating realistic and temporally consistent details is a challenging task, yet it is essential for enhancing the visual quality of the upscaled frames.
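The trade-off described above, between removing flicker and preserving the appearance of the per-frame processed video, can be illustrated with a combined training objective. The following sketch is illustrative only, not the thesis implementation: the function name, the single scalar weight, and the mean-squared-error form of both terms are assumptions.

```python
import numpy as np

def deflicker_loss(output, prev_output, processed, w_temporal=0.5):
    """Hypothetical combined objective for a deflickering network.

    The temporal term penalizes frame-to-frame change (flicker), while
    the fidelity term keeps the output close to the per-frame processed
    video, preserving its appearance.
    """
    temporal = np.mean((output - prev_output) ** 2)   # flicker penalty
    fidelity = np.mean((output - processed) ** 2)     # appearance penalty
    return w_temporal * temporal + (1.0 - w_temporal) * fidelity
```

Raising `w_temporal` favors temporal stability at the cost of fidelity to the processed frames, and lowering it does the reverse, which is exactly the balance the multiple loss functions must strike.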
In this thesis, the generative capability of Diffusion Models (DMs) is exploited to synthesize realistic details, and various strategies are implemented to ensure that these details are generated consistently over time. The high complexity of DMs enables high restoration performance but makes them unsuitable for scenarios where efficient processing is required, such as video streaming. For this reason, alternative solutions featuring reduced complexity and fast processing are also explored. In these scenarios, videos are commonly compressed to speed up data transmission, with the consequent introduction of visual artifacts that must be efficiently removed. However, increasing efficiency without compromising effectiveness is a significant challenge. In this thesis, a framework that combines traditional image processing techniques with convolutional neural networks is proposed to remove compression artifacts and restore frame details at a reduced computational cost. As existing methods typically focus on restoring a single artifact at a time, the task of multi-distorted video restoration is finally considered. Indeed, real-world videos may be affected by multiple artifact types simultaneously. Restoring these multi-distorted videos with a single model is challenging, as it requires dynamically adapting the restoration process to the artifact types involved and their respective intensities. In this thesis, a method that automatically estimates the intensity of the artifacts and progressively removes them in distinct stages is proposed to achieve unified video restoration.
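The multi-distortion strategy above, estimating per-artifact intensities and then removing artifacts in distinct stages, can be sketched as follows. This is a hypothetical illustration, not the thesis method: the intensities are taken as given rather than predicted by an estimator network, the stage operators are placeholders, and the strongest-first ordering and skip threshold are assumptions.

```python
import numpy as np

def progressive_restore(frame, intensities, stages, threshold=0.05):
    """Illustrative progressive restoration over multiple artifact types.

    `intensities` maps artifact names to estimated strengths (assumed
    given here; in a learned system they would come from an estimator),
    and `stages` maps each name to a restoration operator conditioned on
    that strength. Stages run strongest-first; artifacts whose estimated
    intensity falls below `threshold` are skipped entirely.
    """
    for name in sorted(intensities, key=intensities.get, reverse=True):
        level = intensities[name]
        if level >= threshold:
            # each stage removes one artifact type at its estimated level
            frame = stages[name](frame, level)
    return frame
```

Handling the dominant distortion first reflects the idea that the restoration process must adapt dynamically to which artifacts are present and how severe each one is.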
PIROLA, YURI
BIANCO, SIMONE
Video restoration; Deep learning; Deep neural networks; Video processing; Artifact reduction
INF/01 - INFORMATICA
English
26 Feb 2025
37
2023/2024
open
Files in this item:
phd_unimib_816050.pdf (open access)
Description: Deep learning methods for video restoration
Attachment type: Doctoral thesis
Format: Adobe PDF
Size: 6.57 MB

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10281/550722