Image Colorization

Overview:

We are thrilled to share our recent project, in which we tackled the challenge of colorizing grayscale images using deep learning. Deep neural networks can be difficult to train, especially as their depth increases, which often leads to problems such as vanishing and exploding gradients that hurt the accuracy of the network. We followed the CNN model described in Section 3.1 of the paper.

To overcome these challenges, our team used Residual Networks (ResNets) and skip connections in our implementation. We employed an autoencoder, a neural network consisting of an encoder, a bottleneck latent space, and a decoder, to produce the colorized output. Additionally, we applied transfer learning, using a ResNet-18-Gray model pre-trained on grayscale images to extract features from the grayscale input.
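As a minimal sketch of the encoder-decoder idea described above (the class name, layer sizes, and depth here are illustrative assumptions, not our exact architecture), the model takes the single lightness channel in and predicts the two colour channels out, with a skip connection carrying encoder features straight into the decoder:

```python
import torch
import torch.nn as nn

class ColorizationNet(nn.Module):
    """Illustrative autoencoder: L channel (1 x H x W) -> ab channels (2 x H x W)."""
    def __init__(self):
        super().__init__()
        # Encoder: two downsampling conv blocks
        self.enc1 = nn.Sequential(nn.Conv2d(1, 64, 3, stride=2, padding=1),
                                  nn.BatchNorm2d(64), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv2d(64, 128, 3, stride=2, padding=1),
                                  nn.BatchNorm2d(128), nn.ReLU())
        # Decoder: upsample back to input resolution
        self.dec1 = nn.Sequential(nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),
                                  nn.BatchNorm2d(64), nn.ReLU())
        self.dec2 = nn.ConvTranspose2d(64, 2, 4, stride=2, padding=1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(e1)
        d1 = self.dec1(e2) + e1           # skip connection (residual-style addition)
        return torch.tanh(self.dec2(d1))  # ab output squashed to [-1, 1]

model = ColorizationNet()
ab = model(torch.randn(1, 1, 64, 64))  # shape: (1, 2, 64, 64)
```

In the real project the encoder weights came from the pre-trained ResNet-18-Gray rather than being trained from scratch; the skip connection here shows why gradients flow more easily through such a network.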

Furthermore, we simplified the colorization problem by working in the LAB color space: the network takes the lightness channel (L) as input and only has to predict the two color channels (a and b). Please see the attached list of hyperparameters for more details.
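The LAB split can be sketched with scikit-image's standard conversion (the use of `skimage` here is an assumption for illustration; any RGB-to-LAB routine works the same way):

```python
import numpy as np
from skimage import color

rgb = np.random.rand(64, 64, 3)   # stand-in for any RGB image in [0, 1]
lab = color.rgb2lab(rgb)          # convert to CIE-L*a*b*

L = lab[:, :, 0]                  # lightness channel: the network's input
ab = lab[:, :, 1:]                # the two color channels: the prediction targets
```

L lies in [0, 100], so grayscale structure is preserved exactly, and the network's entire job reduces to predicting the two-channel `ab` array.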
After training our model for 50 epochs, we obtained strong results: the model colorized grayscale images of flowers convincingly, assigning plausible colors to the different parts of each flower.


Architecture:

Hyperparameter Details:

Hyperparameter    Description
Optimiser         Adam Optimiser
Learning Rate     0.001
Loss Function     MSE Loss
Batch Size        256
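Wiring the hyperparameters above into a single PyTorch training step looks roughly like this (the one-layer `model` and the small tensor shapes are placeholders; the real run used the full network and batches of 256):

```python
import torch
import torch.nn as nn

# Placeholder model: anything mapping the L channel (1ch) to ab (2ch)
model = nn.Conv2d(1, 2, 3, padding=1)
optimiser = torch.optim.Adam(model.parameters(), lr=0.001)  # Adam, lr = 0.001
criterion = nn.MSELoss()                                    # MSE loss

L = torch.randn(4, 1, 32, 32)    # batch of lightness channels
ab = torch.randn(4, 2, 32, 32)   # ground-truth color channels

optimiser.zero_grad()
loss = criterion(model(L), ab)   # per-pixel MSE between predicted and true ab
loss.backward()
optimiser.step()
```

One such step per batch, repeated for 50 epochs, is the whole training loop.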

Autoencoders:


Convolutional Neural Network

Abstract:

The project focused on colorizing grayscale images using deep learning methods, particularly emphasizing the advantages of the CIE-L*a*b* colorspace. We optimized training by leveraging a pre-trained ResNet-18-Gray model, which expedited the process and highlighted the richness of information in the lightness channel alone. After experimenting with different architectures, we settled on a single-stream network and trained it with MSE loss for vibrant colorization results. While quantitative evaluation remained challenging due to the inherent ambiguity of colorization, we used per-pixel MSE to compare models. Throughout training, the model progressively learned to assign accurate colors to distinct image regions, demonstrating its capability to capture nuanced color representations.
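The per-pixel MSE metric mentioned above is simple to state precisely (this helper is an illustrative sketch, not code from the project):

```python
import numpy as np

def per_pixel_mse(pred_ab, true_ab):
    """Squared error between predicted and true ab channels,
    averaged over every pixel and both channels."""
    return float(np.mean((pred_ab - true_ab) ** 2))

# Example: predicting all zeros against all ones gives an MSE of 1.0
err = per_pixel_mse(np.zeros((8, 8, 2)), np.ones((8, 8, 2)))  # -> 1.0
```

Because many colorizations of the same grayscale image are plausible, a low MSE rewards conservative (desaturated) predictions, which is why we treated it only as a relative comparison between models rather than an absolute quality score.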



MEDIA :

http://www.youtube.com/embed/J8a8f-ShQZs?wmode=opaque

Team Members

  1. Gajanan Sapsod
  2. Netra Batwe
  3. Nitish Bodkhe

Mentors

  1. Aditya Rudra
  2. Adityaa Jivoji
  3. Khushi Dave
  4. Rutu Shrirame