colorization
get colored images from my pretrained caffemodel
Hello, thanks for your work. I want to use a pre-trained caffemodel, trained on my own dataset, to output colored images. First I found colorize.py (Test-Time Python Script) and replaced the default model (colorization_release_v2.caffemodel) with my caffemodel (e.g. colornet_iter_51000.caffemodel), but I found that my caffemodel's network structure is different from the default model's, so the output image seems to be wrong. What should I do? If you have free time, please answer my question, thanks!
Hi, did you use the correct "prototxt" file? This file contains the architecture details.
Yes, I also replaced the pre-trained "prototxt" file with the training "prototxt" file, but another problem appeared: the training pipeline can't output the colored images.
I know. I had the same problem with demo_release.py (the PyTorch version), but when I used this command: python ./colorize.py -img_in ./demo/imgs/ILSVRC2012_val_00041580.JPEG -img_out ./out.png and replaced the .caffemodel and .prototxt files inside colorize.py, my problem was solved.
I replaced the corresponding part in colorize.py with ./models/colorization_train_val_v2.prototxt and colornet_iter_135000.caffemodel (a caffemodel saved during training), but the output image comes from net.blobs['class8_ab'].data in colorize.py, and this 'class8_ab' blob is not in the structure of the training caffemodel. Will the output result be correct? Can you tell me which file you replaced?
I use ./models/colorization_deploy_v2.prototxt for the .prototxt file. I think this file contains the latest architecture updates. colorization_train_val_v2.prototxt is for training and validation, not for testing. For testing the model on a single image you should use the ./models/colorization_deploy_v2.prototxt file.
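For reference, here is a minimal sketch of the substitution being described, assuming a standard pycaffe setup; the file names are the ones mentioned in this thread, and the rest of colorize.py's pre/post-processing stays as-is:

import caffe

# Load the deploy architecture together with your own training snapshot
# (colornet_iter_135000.caffemodel is the snapshot name from this thread;
# substitute whichever iteration you want to test).
net = caffe.Net('./models/colorization_deploy_v2.prototxt',
                './colornet_iter_135000.caffemodel',
                caffe.TEST)

# After net.forward(), the predicted ab channels are read from the
# 'class8_ab' blob, which exists in the deploy prototxt but not in
# colorization_train_val_v2.prototxt.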
So you use ./models/colorization_deploy_v2.prototxt and the model you trained (not the default ./models/colorization_release_v2.caffemodel) and you get the right result? I just hope that the colored images I get come from a model fine-tuned on my own dataset.
Yes, exactly. I also trained the model on another dataset. No worries!
I'm sorry to bother you again. I don't know if there was a problem with the data pre-processing during training (I processed my image dataset into LMDB files as BGR images). I ran colorize.py as you did, but the output image looks like a BGR picture, not an RGB image! Have you encountered such a situation? Can you tell me how you preprocessed your data?
This is the result I got when I ran colorize.py:
No worries. That is interesting. No, I did not face this problem. Here is the script I used to prepare the LMDB dataset:
import glob
import os

import cv2
import lmdb
import caffe

# Placeholder values: read_path (a glob pattern for the training images)
# and save_path (the output LMDB directory) were defined elsewhere in the
# original script.
read_path = 'imgs/*.png'
save_path = 'train_lmdb'

# Read every image into memory and track the total byte size.
img_list = sorted(glob.glob(read_path))
dataset = []
data_size = 0
for i, v in enumerate(img_list):
    img = cv2.imread(v, cv2.IMREAD_UNCHANGED)  # OpenCV loads color images in BGR order
    dataset.append(img)
    data_size += img.nbytes
    print("read {}".format(v))
print(len(img_list))

# Open the LMDB with headroom over the raw data size.
env = lmdb.open(save_path, map_size=data_size * 10)
with env.begin(write=True) as txn:
    # txn is a Transaction object
    for i, v in enumerate(img_list):
        base_name = os.path.splitext(os.path.basename(v))[0]
        key = base_name.encode('ascii')
        data = dataset[i].tobytes()  # raw bytes, kept in OpenCV's HWC/BGR layout
        if dataset[i].ndim == 2:
            H, W = dataset[i].shape
            C = 1
        else:
            H, W, C = dataset[i].shape
        # Pack the raw pixels into a Caffe Datum keyed by the file name.
        datum = caffe.proto.caffe_pb2.Datum()
        datum.channels = C
        datum.height = H
        datum.width = W
        datum.data = data
        txn.put(key, datum.SerializeToString())
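A quick way to sanity-check what actually landed in the LMDB (a hedged sketch, reusing the save_path, key, and Datum setup from the script above) is to read one entry back and inspect its shape:

import numpy as np

# Re-open the LMDB read-only and decode the last key written above.
with lmdb.open(save_path, readonly=True) as env:
    with env.begin() as txn:
        datum = caffe.proto.caffe_pb2.Datum()
        datum.ParseFromString(txn.get(key))
        img = np.frombuffer(datum.data, dtype=np.uint8).reshape(
            datum.height, datum.width, datum.channels)
        print(img.shape)  # e.g. (256, 256, 3), channels in BGR order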
Thanks, I read your script. It seems that you did not preprocess the training data into BGR images, so it can directly produce the correct result.
You're welcome. I did use the BGR format: when you read an image using OpenCV, its channel order is BGR, not RGB, which means my LMDB contains images in BGR format.
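If the final output still looks channel-swapped, a common fix is a single conversion at the save/display boundary. A minimal sketch, where out_img is a hypothetical stand-in for the final colorized array produced by colorize.py:

import cv2
import numpy as np

# Hypothetical stand-in for the colorized result array (BGR order assumed).
out_img = np.zeros((256, 256, 3), dtype=np.uint8)

# cv2.imwrite expects BGR, while PIL/matplotlib expect RGB, so convert
# exactly once depending on which library does the saving/displaying.
cv2.imwrite('out.png', out_img)                  # correct if out_img is BGR
rgb = cv2.cvtColor(out_img, cv2.COLOR_BGR2RGB)   # use this for PIL/matplotlib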
Oh! Yes, thanks a lot!
Good luck!