Here, I implement an image colorization model trained and evaluated with the Huber loss. Example predictions from the model can be seen in the notebook. I have also included a paper describing the approach, with some analysis of the results.
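As a quick reference, the Huber loss used as the training objective is quadratic for small errors and linear for large ones, which makes it less sensitive to outlier pixels than plain MSE. The sketch below is a minimal NumPy illustration of the loss itself, not the repo's actual training code; the `delta` threshold of 1.0 is an assumed default.

```python
import numpy as np

def huber_loss(pred, target, delta=1.0):
    """Huber loss: 0.5*e^2 where |e| <= delta, else delta*(|e| - 0.5*delta)."""
    err = pred - target
    abs_err = np.abs(err)
    quadratic = 0.5 * err ** 2          # smooth near zero
    linear = delta * (abs_err - 0.5 * delta)  # robust tail for large errors
    return np.mean(np.where(abs_err <= delta, quadratic, linear))

# A small error falls in the quadratic region, a large one in the linear region.
small = huber_loss(np.array([0.5]), np.array([0.0]))  # 0.5 * 0.5**2 = 0.125
large = huber_loss(np.array([2.0]), np.array([0.0]))  # 1.0 * (2.0 - 0.5) = 1.5
```

In a colorization setting, `pred` and `target` would be the predicted and ground-truth color channels of an image batch, averaged over all pixels.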
This model was heavily inspired by the following works:
- Jeff Hwang and You Zhou. Image colorization with deep convolutional neural networks, 2016. http://cs231n.stanford.edu/reports/2016/pdfs/219_Report.pdf.
- Ryan Dahl. Automatic colorization, 2016. https://tinyclouds.org/colorize/.
I obtained my dataset from the following sources:
- Aditya Jain. Flickr 8k dataset, 2020. https://www.kaggle.com/adityajn105/flickr8k.
- Cyrus Rashtchian, Peter Young, Micah Hodosh, and Julia Hockenmaier. Collecting image annotations using Amazon's Mechanical Turk. In Proceedings of the NAACL HLT 2010 Workshop on Creating Speech and Language Data with Amazon's Mechanical Turk, pages 139–147, Los Angeles, June 2010. Association for Computational Linguistics. (Original producers of the dataset, which was posted to Kaggle by Aditya Jain.)
This repo also references and uses some materials from UCSD's ECE 176 course.