unprocessing
Error when denoising full-resolution images on the Darmstadt dataset
Hello, and first of all, thanks for the great repository! I was trying to run the dnd_denoise.py script on the full-resolution images from the Darmstadt dataset, but I encountered the following error:
InvalidArgumentError (see above for traceback): ConcatOp : Dimensions of inputs should match: shape[0] = [1,252,190,512] vs. shape[1] = [1,252,189,256] [[node decoder/skip_connection/concat (defined at dnd_denoise.py:174) ]] [[node denoised_img (defined at dnd_denoise.py:174) ]]
Do you know what might have caused this? For reference, I am using the unprocessing_srgb_loss model, and I have just removed the bounding boxes and used the full-resolution noisy image.
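Judging from the error, the concat in decoder/skip_connection fails because one spatial dimension comes out as 190 on the encoder path but 189 on the upsampled decoder path, which typically happens when the input height or width is not divisible by the network's total downsampling factor. A common workaround is to pad the full-resolution image up to a multiple of that factor before denoising and crop afterwards. The sketch below assumes a factor of 32 (a 5-level U-Net) and a hypothetical denoise_fn standing in for the model call; both are assumptions, not the repository's actual API.

```python
import numpy as np

# Assumed downsampling factor: a U-Net with k pooling stages needs both spatial
# dimensions divisible by 2**k, otherwise the upsampled decoder features no
# longer match the encoder features at the skip-connection concat
# (e.g. 190 vs 189 in the error above). 32 covers a 5-level encoder.
DOWNSAMPLE_FACTOR = 32

def pad_to_multiple(image, factor=DOWNSAMPLE_FACTOR):
    """Reflect-pad H and W of an HxW(xC) array up to a multiple of `factor`."""
    h, w = image.shape[:2]
    pad_h = (-h) % factor
    pad_w = (-w) % factor
    pad_spec = [(0, pad_h), (0, pad_w)] + [(0, 0)] * (image.ndim - 2)
    return np.pad(image, pad_spec, mode='reflect'), (h, w)

def crop_to_original(image, original_hw):
    """Undo pad_to_multiple by cropping back to the original height/width."""
    h, w = original_hw
    return image[:h, :w, ...]

# Usage sketch (denoise_fn is hypothetical; substitute the actual model call):
# padded, hw = pad_to_multiple(noisy_full_res)
# denoised = denoise_fn(padded[None, ...])[0]
# denoised = crop_to_original(denoised, hw)
```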
Can you share the DND dataset with me? Thank you!