Abstract: To transfer color data from a device-dependent color space (that of a video camera) into a device-independent color space, a multilayer feedforward network with the error backpropagation (BP) learning rule was treated as a nonlinear transformer realizing the mapping from the RGB color space to the CIELAB color space. Different network structures yielded a range of mapping accuracies. BP neural networks can provide satisfactory mapping accuracy for color space transformation in video cameras.
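The abstract does not give the network details, but the idea is straightforward to sketch. Below is a minimal, self-contained illustration, not the paper's exact setup: a one-hidden-layer feedforward network trained with plain backpropagation to approximate the RGB-to-CIELAB mapping. Since the paper's measured camera data are unavailable, the analytic sRGB-to-CIELAB conversion stands in as ground truth; the layer sizes, learning rate, and sample count are all illustrative assumptions.

```python
# Minimal sketch (not the paper's exact setup): a small feedforward network
# trained with plain backpropagation to approximate RGB -> CIELAB.
# The analytic sRGB -> CIELAB conversion stands in for measured camera data.
import numpy as np

def srgb_to_lab(rgb):
    """Analytic sRGB (0..1) -> CIELAB under the D65 white point."""
    rgb = np.where(rgb > 0.04045, ((rgb + 0.055) / 1.055) ** 2.4, rgb / 12.92)
    M = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = rgb @ M.T / np.array([0.95047, 1.0, 1.08883])  # normalize by D65 white
    f = np.where(xyz > 0.008856, np.cbrt(xyz), 7.787 * xyz + 16.0 / 116.0)
    L = 116.0 * f[:, 1] - 16.0
    a = 500.0 * (f[:, 0] - f[:, 1])
    b = 200.0 * (f[:, 1] - f[:, 2])
    return np.stack([L, a, b], axis=1)

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(4096, 3))   # random RGB training samples
Y = srgb_to_lab(X) / 100.0                  # scale targets to roughly [-1, 1]

# One hidden layer with tanh activation; sizes are illustrative only.
W1 = rng.normal(0, 0.5, (3, 32)); b1 = np.zeros(32)
W2 = rng.normal(0, 0.5, (32, 3)); b2 = np.zeros(3)
lr = 0.05
for epoch in range(2000):
    H = np.tanh(X @ W1 + b1)                # forward pass
    P = H @ W2 + b2
    E = P - Y                               # error signal
    # Backpropagate the squared-error gradient through both layers.
    gW2 = H.T @ E / len(X); gb2 = E.mean(axis=0)
    dH = (E @ W2.T) * (1.0 - H ** 2)        # tanh derivative
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - Y) ** 2)
print(f"final training MSE (scaled Lab units): {mse:.5f}")
```

In the paper's setting, the training pairs would instead come from camera RGB readings matched against colorimetric measurements, and varying the hidden-layer width is what produces the different mapping accuracies reported.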
Funding: Supported by grants from the National Natural Science Foundation of China (61906184), the Joint Lab of CAS–HK, and the Shanghai Committee of Science and Technology, China (20DZ1100800, 21DZ1100100).
Abstract: Video colorization is a challenging and highly ill-posed problem. Although recent years have witnessed remarkable progress in single-image colorization, there has been relatively little research effort on video colorization, and existing methods often suffer from severe flickering artifacts (temporal inconsistency) or unsatisfactory colorization. We address this problem from a new perspective, by jointly considering colorization and temporal consistency in a unified framework. Specifically, we propose a novel temporally consistent video colorization (TCVC) framework. TCVC effectively propagates frame-level deep features in a bidirectional way to enhance the temporal consistency of colorization. Furthermore, TCVC introduces a self-regularization learning (SRL) scheme to minimize the differences between predictions obtained using different time steps. SRL does not require any ground-truth color videos for training and can further improve temporal consistency. Experiments demonstrate that our method not only provides visually pleasing colorized video, but also achieves clearly better temporal consistency than state-of-the-art methods. A video demo is provided at https://www.youtube.com/watch?v=c7dczMs-olE, and code is available at https://github.com/lyh-18/TCVC-Temporally-Consistent-Video-Colorization.
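The abstract describes SRL only at a high level. The following is a hedged sketch of that idea, not the authors' implementation: a stand-in colorization network (`ColorizeNet`, hypothetical) is run on the same grayscale clip with two different temporal samplings, and an L1 penalty enforces agreement between the two predictions on the shared frames. Consistent with the abstract, no ground-truth color video enters the loss; the stride choice and the L1 penalty are assumptions.

```python
# Hedged sketch of the self-regularization idea from the abstract: run a
# (hypothetical) colorization network on the same clip with two different
# time steps and penalize disagreement between the two predictions.
# The paper's exact SRL formulation may differ.
import torch
import torch.nn as nn

class ColorizeNet(nn.Module):
    """Stand-in video colorization model: grayscale clip -> ab color channels."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 2, 3, padding=1),
        )

    def forward(self, gray):          # gray: (B, 1, T, H, W)
        return self.net(gray)         # ab:   (B, 2, T, H, W)

def srl_loss(model, gray_clip):
    """Self-regularization: predictions from different temporal samplings of
    the same frames should agree -- no ground-truth color is needed."""
    pred_dense = model(gray_clip)                 # every frame, step 1
    pred_sparse = model(gray_clip[:, :, ::2])     # every other frame, step 2
    # Compare only the frames both passes actually colorized.
    return (pred_dense[:, :, ::2] - pred_sparse).abs().mean()

model = ColorizeNet()
clip = torch.rand(1, 1, 8, 64, 64)                # toy grayscale clip
loss = srl_loss(model, clip)
loss.backward()
print(f"SRL consistency loss: {loss.item():.4f}")
```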
Funding: Supported by the National Natural Science Foundation of China under Grant Nos. U22B2049 and 62332010.
Abstract: Video colorization aims to add color to grayscale or monochrome videos. Although existing methods have achieved substantial and noteworthy results in image colorization, video colorization presents more formidable obstacles due to the additional requirement of temporal consistency. Moreover, there has rarely been a systematic review of video colorization methods. In this paper, we review existing state-of-the-art video colorization methods. Because maintaining spatial-temporal consistency is pivotal to video colorization, we further review these methods from this novel perspective to gain deeper insight into their evolution. Video colorization methods can be categorized into four main categories: optical-flow-based methods, scribble-based methods, exemplar-based methods, and fully automatic methods. However, optical-flow-based methods rely heavily on accurate optical-flow estimation, scribble-based methods require extensive user interaction and modification, exemplar-based methods face challenges in obtaining suitable reference images, and fully automatic methods often struggle to meet specific colorization requirements. We also discuss existing challenges and highlight several future research opportunities worth exploring.
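As a concrete illustration of the core step shared by optical-flow-based methods, the sketch below backward-warps the chrominance (ab) channels of an already-colorized frame to the next frame along a flow field. The flow is assumed to come from an external estimator (the survey's point is precisely that these methods depend on its accuracy); here a zero-flow placeholder keeps the example self-contained, and `propagate_color` is a hypothetical helper, not taken from any surveyed method.

```python
# Minimal sketch of flow-based color propagation: warp the color (ab)
# channels of an already-colorized frame to the next frame along an
# estimated flow field. The flow itself would come from an external
# estimator; a zero-flow placeholder is used here.
import torch
import torch.nn.functional as F

def propagate_color(ab_prev, flow):
    """Backward-warp previous-frame ab channels with bilinear sampling.
    ab_prev: (B, 2, H, W) colors of frame t-1
    flow:    (B, 2, H, W) flow from frame t to frame t-1, in pixels
    """
    B, _, H, W = ab_prev.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    grid = torch.stack([xs, ys], dim=0).float().unsqueeze(0) + flow  # sample points
    # Normalize coordinates to [-1, 1], as grid_sample expects.
    grid[:, 0] = 2.0 * grid[:, 0] / (W - 1) - 1.0
    grid[:, 1] = 2.0 * grid[:, 1] / (H - 1) - 1.0
    return F.grid_sample(ab_prev, grid.permute(0, 2, 3, 1), align_corners=True)

ab_prev = torch.rand(1, 2, 64, 64)        # toy colorized frame
flow = torch.zeros(1, 2, 64, 64)          # zero flow: output equals input
ab_next = propagate_color(ab_prev, flow)
print(torch.allclose(ab_next, ab_prev))   # True
```

Any error in the estimated flow is inherited directly by the propagated colors, which is why accurate flow estimation is the decisive factor for this family of methods.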