CGIntrinsics
How to test on a single image?
Hi, thank you for your work, I like the results. We are thinking about utilizing the model to improve the registration pipeline for pairs of images. However, I am not successful in using the code for testing on our dataset. I would find it very useful if you could provide some info about how to get predictions from inputs which are not in the IIW/SAW dataset format - e.g. how to get an output from a single image.
Best regards, Milan Pultar
Did you manage to make it work on a single image? Can you share a script for others to use?
No, sorry. I haven't paid much attention to this since I wrote the question, but it looks like it should not be too hard to create such a script.
I succeeded in testing on a single image with the provided pre-trained model, but found severe checkerboard artifacts. It may be because of the image pre-processing.
I can load the pretrained model; however, the outputs are two single-channel images. Minimal code to import the pretrained model:
```python
import torch

from CGIntrinsics.models.intrinsic_model import *
from CGIntrinsics.models.networks import *
from CGIntrinsics.models.base_model import *

# Build the generator and load the IIW/SAW weights shipped with the repo.
cgnet = define_G(input_nc=3, output_nc=3, ngf=64, which_model_netG="unet_256")
cgnet.load_state_dict(torch.load("pretrained_models/cgintrinsics_iiw_saw_final_net_G.pth"))
cgnet = cgnet.cuda()
cgnet.eval()
```
You can read this comment to see how to get reflectance/albedo and shading from the outputs: https://github.com/zhengqili/CGIntrinsics/issues/1#issuecomment-580114465
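For completeness, here is a minimal, untested sketch of running the loaded model on one image. It assumes the generator returns a (log-reflectance, log-shading) pair that is decoded with `exp()`, as described in the linked comment; the `input.png` file name, the 256x256 resize, and the [0, 1] scaling are my assumptions, not taken from the repository.

```python
import numpy as np
import torch
from PIL import Image

# Load and normalize a single RGB image (file name is hypothetical).
img = Image.open("input.png").convert("RGB").resize((256, 256))
x = torch.from_numpy(np.asarray(img, dtype=np.float32) / 255.0)
x = x.permute(2, 0, 1).unsqueeze(0).cuda()  # 1 x 3 x H x W

with torch.no_grad():
    # Assumption: the generator returns two single-channel log-domain maps.
    log_R, log_S = cgnet(x)

reflectance = torch.exp(log_R)  # decode log-reflectance
shading = torch.exp(log_S)      # decode log-shading
```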
I believe the checkerboard artifacts are caused by the deconv layers and by a flawed implementation of the image-gradient computation.
I removed all the checkerboard artifacts by addressing these two problems.
Hello, could you please share how you addressed the two problems? Thank you!
- Replace the deconv layers with upsampling followed by convolution. This reduces the artifacts but costs more computation (see the sketch after this list).
- It's hard to explain concisely why the original image-gradient implementation is unreasonable, but compute the image gradient as x_(i+1) - x_i rather than x_(i+2) - x_i. Skipping the pixel x_(i+1) leads to artifacts.
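Below is a hedged PyTorch sketch of both fixes; `UpsampleConv` and `image_gradients` are illustrative names of mine, not code from the repository.

```python
import torch
import torch.nn as nn

class UpsampleConv(nn.Module):
    """Drop-in alternative to a ConvTranspose2d upsampling step:
    resize first, then convolve, so the kernel overlap is uniform
    and no checkerboard pattern is introduced."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, x):
        return self.conv(self.up(x))

def image_gradients(img):
    """Adjacent-pixel gradients, x_(i+1) - x_i, for an N x C x H x W tensor.
    An x_(i+2) - x_i stencil skips a pixel, so a 2-pixel-period
    (checkerboard) pattern passes through the gradient loss unpenalized."""
    dx = img[:, :, :, 1:] - img[:, :, :, :-1]  # horizontal differences
    dy = img[:, :, 1:, :] - img[:, :, :-1, :]  # vertical differences
    return dx, dy
```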
This is my NIID-Net project. I hope the modified loss functions there can help you.