Abstract
Video colorization is a challenging and highly ill-posed problem. Although recent years have witnessed remarkable progress in single-image colorization, there has been relatively little research effort on video colorization, and existing methods often suffer from severe flickering artifacts (temporal inconsistency) or unsatisfactory colorization. We address this problem from a new perspective, jointly considering colorization and temporal consistency in a unified framework. Specifically, we propose a novel temporally consistent video colorization (TCVC) framework. TCVC propagates frame-level deep features bidirectionally to enhance the temporal consistency of colorization. Furthermore, TCVC introduces a self-regularization learning (SRL) scheme that minimizes the differences between predictions obtained with different time steps. SRL requires no ground-truth color videos for training and further improves temporal consistency. Experiments demonstrate that our method not only produces visually pleasing colorized videos, but also achieves clearly better temporal consistency than state-of-the-art methods. A video demo is provided at https://www.youtube.com/watch?v=c7dczMs-olE , and code is available at https://github.com/lyh-18/TCVC-Temporally-Consistent-Video-Colorization .
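The abstract names two mechanisms: bidirectional propagation of frame-level deep features, and a self-regularization (SRL) loss between predictions obtained with different time steps. The following is a minimal PyTorch sketch of what those two ideas could look like; the module and function names (`BidirectionalPropagation`, `self_regularization_loss`), the recurrent convolutional fusion, and the plain L1 consistency term are all illustrative assumptions, not the authors' actual architecture, which is available in the linked repository.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class BidirectionalPropagation(nn.Module):
    """Toy bidirectional feature propagation over a clip of frame features.

    Hypothetical stand-in for TCVC's propagation module: each frame's
    features are fused with a hidden state carried forward in time and a
    second state carried backward, so every frame receives context from
    both temporal directions before colorization.
    """

    def __init__(self, channels: int = 64):
        super().__init__()
        self.fwd_fuse = nn.Conv2d(2 * channels, channels, 3, padding=1)
        self.bwd_fuse = nn.Conv2d(2 * channels, channels, 3, padding=1)
        self.merge = nn.Conv2d(2 * channels, channels, 3, padding=1)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (T, C, H, W) deep features of the T frames of one clip
        T, C, H, W = feats.shape

        hidden = feats.new_zeros(1, C, H, W)
        fwd = []
        for t in range(T):  # forward pass: propagate past -> future
            x = torch.cat([feats[t : t + 1], hidden], dim=1)
            hidden = F.relu(self.fwd_fuse(x))
            fwd.append(hidden)

        hidden = feats.new_zeros(1, C, H, W)
        bwd = [None] * T
        for t in reversed(range(T)):  # backward pass: future -> past
            x = torch.cat([feats[t : t + 1], hidden], dim=1)
            hidden = F.relu(self.bwd_fuse(x))
            bwd[t] = hidden

        # Fuse both directions into one temporally informed feature per frame.
        out = [self.merge(torch.cat([f, b], dim=1)) for f, b in zip(fwd, bwd)]
        return torch.cat(out, dim=0)  # (T, C, H, W)


def self_regularization_loss(pred_a: torch.Tensor,
                             pred_b: torch.Tensor) -> torch.Tensor:
    """L1 gap between two colorizations of the same frames produced with
    different time steps; no ground-truth color video is needed."""
    return F.l1_loss(pred_a, pred_b)
```

Under this reading, SRL might, for instance, colorize a clip once at its native frame rate and once with every other frame skipped, then penalize disagreement on the shared frames; this matches the abstract's claim that the scheme needs no ground-truth color videos, though the paper's exact sampling strategy may differ.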
| Original language | English |
| --- | --- |
| Pages (from-to) | 375-395 |
| Number of pages | 21 |
| Journal | Computational Visual Media |
| Volume | 10 |
| Issue number | 2 |
| DOIs | |
| Publication status | Published - Apr 2024 |
| Externally published | Yes |
Bibliographical note
Publisher Copyright: © 2023, The Author(s).
ASJC Scopus Subject Areas
- Computer Vision and Pattern Recognition
- Computer Graphics and Computer-Aided Design
- Artificial Intelligence
Keywords
- feature propagation
- self-regularization
- temporal consistency
- video colorization