Grayscale Image Colorization Method Based on U-Net Network

Author(s)

Zhengbing Hu 1, Oksana Shkurat 2,*, Maksym Kasner 3

1. School of Computer Science, Hubei University of Technology, Wuhan, 430079, China

2. Department of Computer Systems Software, Igor Sikorsky Kyiv Polytechnic Institute, Kyiv, 03056, Ukraine

3. Department of Computer Systems Software, Igor Sikorsky Kyiv Polytechnic Institute, Kyiv, 03056, Ukraine

* Corresponding author.

DOI: https://doi.org/10.5815/ijigsp.2024.02.06

Received: 10 Oct. 2023 / Revised: 26 Nov. 2023 / Accepted: 5 Jan. 2024 / Published: 8 Apr. 2024

Index Terms

Image Colorization, Convolutional Neural Network (CNN), U-Net Network, Gray Level Quantization, Grayscale Image

Abstract

A colorization method for grayscale images based on a fully convolutional neural network is presented in this paper. The proposed colorization method includes color space conversion, grayscale image preprocessing and implementation of an improved U-Net network. The U-Net network is trained and operated on images represented in the Lab color space. The trained U-Net network integrates realistic colors into grayscale images by generating the a and b components of the Lab color model from the L-component data. The median cut quantization method is applied to the L-component data before the U-Net network is trained and operated. A logistic activation function is applied to the normalized results of the convolution layers of the U-Net network. The proposed colorization method has been tested on the ImageNet database. Evaluation results of the proposed method according to various parameters are presented. The colorization accuracy of the proposed method reaches more than 84.81%. The colorization method proposed in this paper is characterized by an optimized convolutional neural network architecture that can be trained on a limited image set within a satisfactory training time. The proposed colorization method can be used to improve image quality and restore data in the development of computer vision systems. Further research can focus on techniques for determining the optimal number of gray levels and on the implementation of combined quantization methods, as well as on the use of the HSV, HLS and other color models for training and operating the neural network.
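
To make the pipeline described in the abstract concrete, the sketch below illustrates the stated preprocessing: conversion to the Lab color space, median cut quantization of the L component, and scaling to the [0, 1] range that a logistic (sigmoid) output layer produces. It is a minimal sketch assuming Python with OpenCV and NumPy; the function names, the default of 16 gray levels and the one-dimensional adaptation of median cut to a single channel are illustrative assumptions, not the authors' implementation.

    # Minimal preprocessing sketch, assuming OpenCV and NumPy are available.
    # Names and the default of 16 gray levels are illustrative assumptions.
    import cv2
    import numpy as np

    def median_cut_quantize_l(l_channel: np.ndarray, levels: int = 16) -> np.ndarray:
        # Recursively split the widest value bucket at its median (a 1-D analogue
        # of Heckbert's median cut) and map every bucket to its mean gray level.
        buckets = [l_channel.ravel()]
        while len(buckets) < levels:
            idx = max(range(len(buckets)), key=lambda i: np.ptp(buckets[i]))
            widest = buckets.pop(idx)
            med = np.median(widest)
            lower, upper = widest[widest <= med], widest[widest > med]
            if lower.size == 0 or upper.size == 0:
                buckets.append(widest)  # bucket holds a single value; stop splitting
                break
            buckets += [lower, upper]
        lut = np.arange(256, dtype=np.float32)  # lookup table over 8-bit gray values
        for b in buckets:
            lut[int(b.min()):int(b.max()) + 1] = b.mean()
        return lut[l_channel].astype(np.uint8)

    def make_training_pair(bgr_image: np.ndarray, levels: int = 16):
        # Convert to 8-bit Lab, quantize the L component, and return the network
        # input (quantized L) and target (a, b components), both scaled to [0, 1]
        # so that a logistic (sigmoid) output layer matches the target range.
        lab = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2LAB)
        l, a, b = cv2.split(lab)
        l_q = median_cut_quantize_l(l, levels)
        x = l_q.astype(np.float32)[..., np.newaxis] / 255.0
        y = np.stack([a, b], axis=-1).astype(np.float32) / 255.0
        return x, y

At inference time only the quantized L component would be fed to the trained U-Net, which predicts the a and b components that are then recombined with L and converted back to an RGB image.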

Cite This Paper

Zhengbing Hu, Oksana Shkurat, Maksym Kasner, "Grayscale Image Colorization Method Based on U-Net Network", International Journal of Image, Graphics and Signal Processing (IJIGSP), Vol.16, No.2, pp. 70-82, 2024. DOI:10.5815/ijigsp.2024.02.06
