CLCC: Contrastive Learning for Color Constancy (CVPR 2021)
Yi-Chen Lo*, Chia-Che Chang*, Hsuan-Chao Chiu, Yu-Hao Huang, Chia-Ping Chen, Yu-Lin Chang, Kevin Jou
MediaTek Inc., Hsinchu, Taiwan
(*) indicates equal contribution.
Paper | Poster | 5-min Video | 5-min Slides | 10-min Slides
Dataset
We preprocess each fold of the dataset and store each sample in .pkl format. Each sample contains the following fields (a minimal loading sketch follows the list):
- Raw image: color checker masked, black level subtracted, and converted to a uint16 [0, 65535] BGR numpy array with shape (H, W, 3).
- RGB label: L2-normalized numpy vector with shape (3,).
- Color checker: [0, 4095] BGR numpy array with shape (24, 3), used by the raw-to-raw mapping presented in Section 4.3 of our paper (see util/raw2raw.py). A few of them are stored as all zeros because color checker detection failed. Note that we convert it to RGB format during preprocessing in dataloader.py, and our raw-to-raw mapping algorithm also manipulates it in RGB format.
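Below is a minimal sketch (not part of the repo) of loading and sanity-checking one preprocessed sample. The file name and the dictionary keys ("img", "gt", "cc24") are assumptions; please check dataloader.py for the actual field names and layout.

```python
# Hypothetical sketch of inspecting one preprocessed .pkl sample.
# The sample path and the keys "img", "gt", "cc24" are assumptions.
import pickle
import numpy as np

with open("data/gehler/0/sample_0000.pkl", "rb") as f:  # hypothetical file name
    sample = pickle.load(f)

raw = sample["img"]       # uint16 BGR image, shape (H, W, 3), range [0, 65535]
label = sample["gt"]      # L2-normalized RGB illuminant, shape (3,)
checker = sample["cc24"]  # [0, 4095] BGR color checker patches, shape (24, 3)

assert raw.dtype == np.uint16 and raw.ndim == 3
assert np.isclose(np.linalg.norm(label), 1.0, atol=1e-4)
# All-zero patches mean color checker detection failed for this sample.
checker_valid = not np.allclose(checker, 0)
print(raw.shape, label, checker_valid)
```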
Training and Evaluation
CLCC is a Python 3 & TensorFlow 1.x implementation based on the FC4 codebase.
- Dataset preparation: Download the preprocessed dataset here. Please make sure your dataset folder is structured as <DATA_DIR>/<DATA_NAME>/<FOLD_ID> (e.g., data/gehler/0, just like how it is structured in the download source).
- Pretrained weights preparation: Download ImageNet-pretrained weights here. Place the pretrained weight files under pretrained_models/imagenet/.
- Training: Modify config.py (i.e., you may want to rename EXP_NAME and specify the training data via DATA_NAME, TRAIN_FOLDS, and TEST_FOLDS) and execute train.py. Checkpoints will be saved under ckpts/EXP_NAME during training. A sketch of these config fields follows this list.
- Evaluation: Once training is done, you can evaluate a checkpoint with eval.py on a specific test fold. We recommend referring to scripts/eval_squeezenet_clcc_gehler.sh for 3-fold cross-validation.
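As a reference for the training step above, here is a hedged sketch of the config.py fields mentioned in this README, set up for one fold of a 3-fold cross-validation on the Gehler dataset. The concrete experiment name and fold indices are examples only; other fields in config.py are left as-is.

```python
# config.py (excerpt) -- example values only. EXP_NAME, DATA_NAME, TRAIN_FOLDS,
# and TEST_FOLDS are the fields mentioned above; the values are illustrative.
EXP_NAME = "squeezenet_clcc_gehler_fold0"   # checkpoints are saved to ckpts/EXP_NAME
DATA_NAME = "gehler"                        # expects <DATA_DIR>/gehler/<FOLD_ID>
TRAIN_FOLDS = [1, 2]                        # train on folds 1 and 2
TEST_FOLDS = [0]                            # evaluate on the held-out fold 0
```

With this in place, train.py trains on folds 1 and 2 and eval.py evaluates on fold 0; rotating TEST_FOLDS over 0, 1, and 2 gives the full 3-fold cross-validation (see scripts/eval_squeezenet_clcc_gehler.sh for the evaluation side).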
Acknowledgments
- FC4: https://github.com/yuanming-hu/fc4.
- Color checker detection: https://github.com/colour-science/colour-checker-detection. To increase detection accuracy, warping the chart with a homography computed from the color checker coordinates provided by the original dataset helps a lot (see the sketch below).
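For reference, a rough sketch of that homography trick (not part of this repo): given the four chart corner coordinates shipped with the original dataset, warp the chart to a frontal view before detecting or sampling the 24 patches. The corner ordering and the output size are assumptions.

```python
# Hedged sketch: rectify the color checker with a homography before patch sampling.
# Corner ordering (TL, TR, BR, BL) and the output size are assumptions.
import cv2
import numpy as np

def rectify_checker(raw_bgr, corners, out_w=600, out_h=400):
    """corners: (4, 2) array of chart corners (TL, TR, BR, BL) in image coordinates."""
    src = np.asarray(corners, dtype=np.float32)
    dst = np.array([[0, 0], [out_w - 1, 0], [out_w - 1, out_h - 1], [0, out_h - 1]],
                   dtype=np.float32)
    H = cv2.getPerspectiveTransform(src, dst)  # exact homography from 4 point pairs
    return cv2.warpPerspective(raw_bgr.astype(np.float32), H, (out_w, out_h))
```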
Citation
@InProceedings{Lo_2021_CVPR,
author = {Lo, Yi-Chen and Chang, Chia-Che and Chiu, Hsuan-Chao and Huang, Yu-Hao and Chen, Chia-Ping and Chang, Yu-Lin and Jou, Kevin},
title = {CLCC: Contrastive Learning for Color Constancy},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2021},
pages = {8053-8063}
}
