zoom-learn-zoom

Training Code Not Found

Open • ZeeshanNadir opened this issue 5 years ago • 20 comments

Hi, I am not able to find the code that performs backpropagation. There are multiple losses in loss.py and I want to understand how to use them. Could you please provide the training/backpropagation code, or explain how to use loss.py to run backpropagation?

Thanks

ZeeshanNadir avatar Jun 26 '19 00:06 ZeeshanNadir

+1

zhLawliet avatar Jul 16 '19 12:07 zhLawliet

+1

qq286838947 avatar Jul 18 '19 11:07 qq286838947

Would you kindly provide the training code?

Chokurei avatar Jul 26 '19 09:07 Chokurei

> Hi, I am not able to find the code that performs backpropagation. There are multiple losses in loss.py and I want to understand how to use them. Could you please provide the training/backpropagation code, or explain how to use loss.py to run backpropagation? Thanks

I used the train.py script at commit 1ebbdf6657ebbbb8a254c20e3e84dc5b6ddfa6e2, but the model does not converge at all. There are misalignments of roughly 10 pixels between the input and ground-truth images (size 512x512), and the CoBi loss fails.

So I think it is impossible to reproduce this paper without the original training code.

bai-shang avatar Jul 29 '19 09:07 bai-shang

> I used the train.py script at commit 1ebbdf6, but the model does not converge at all. [...] So I think it is impossible to reproduce this paper without the original training code.

If you follow the rough alignment scripts and apply the computed matrices correctly during training, you should be able to get results similar to those I showed in the paper. It's not trivial, and I haven't finished cleaning up all the util functions.

People have emailed me about small artifacts and training-parameter details after they were able to re-implement the paper and get results close to what I've shown. If you just use the old training code without changing anything, I'm not surprised it isn't converging.
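A minimal sketch of the kind of per-pair warp this implies, assuming the alignment scripts write a 2x3 affine matrix to a `tform.txt` file (the file name and layout here are assumptions for illustration, not the authors' exact pipeline):

```python
import numpy as np
import cv2

# Hypothetical example: warp the ground-truth patch with the affine matrix
# computed by the rough alignment scripts before the loss is applied.
# A flattened 2x3 affine stored in tform.txt is an assumption.
tform = np.loadtxt('tform.txt').reshape(2, 3).astype(np.float32)
gt = cv2.imread('ground_truth.png')
h, w = gt.shape[:2]
gt_aligned = cv2.warpAffine(gt, tform, (w, h), flags=cv2.INTER_LINEAR)
# gt_aligned is now roughly registered to the network input, so the CoBi
# loss only has to absorb small residual misalignment.
```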

ceciliavision avatar Aug 22 '19 17:08 ceciliavision

> So I think it is impossible to reproduce this paper without the original training code.

I have reproduced the paper with my own training code. It is better to set the parameter w_spatial to 0.5 or bigger, and I pretrain the model with an L1 loss. Although the CoBi loss doesn't decrease much, the result is amazingly clear.

Hoping for the training code...
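A minimal sketch of that two-stage schedule, in TensorFlow 1.x style to match the repo; `build_model`, `cobi_vgg`, and `cobi_rgb` are hypothetical stand-ins for the repo's own functions, not its actual API:

```python
import tensorflow as tf

# Hypothetical sketch: L1 pretraining followed by CoBi fine-tuning.
# build_model / cobi_vgg / cobi_rgb are placeholders, not the repo's API.
inp = tf.placeholder(tf.float32, [None, 512, 512, 4])  # packed Bayer input (assumed shape)
gt = tf.placeholder(tf.float32, [None, 512, 512, 3])   # aligned RGB ground truth

out = build_model(inp)                       # network prediction
l1_loss = tf.reduce_mean(tf.abs(out - gt))   # pixel-wise L1 for pretraining
cobi_loss = cobi_vgg(out, gt, w_spatial=0.5) + cobi_rgb(out, gt, w_spatial=0.5)

opt = tf.train.AdamOptimizer(learning_rate=1e-4)
pretrain_op = opt.minimize(l1_loss)
finetune_op = opt.minimize(cobi_loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(PRETRAIN_ITERS):   # stage 1: stabilize with L1
        sess.run(pretrain_op, feed_dict=next_batch())
    for _ in range(FINETUNE_ITERS):   # stage 2: sharpen with CoBi
        sess.run(finetune_op, feed_dict=next_batch())
```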

llp1996 avatar Aug 26 '19 06:08 llp1996

> I have reproduced the paper with my own training code. It is better to set the parameter w_spatial to 0.5 or bigger, and I pretrain the model with an L1 loss.

So, what is your w_spatial? In the author's training code, when adopting 'contextual', w_cont = 1, w_patch = 1.5, w_spatial = 0.5. I tried the training code with the weights the author suggested, and the result is bad. Would you kindly explain your weight setting?
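For reference, the author-suggested weights quoted above, written as a hypothetical config dict (the key names mirror the flags mentioned in this thread, not necessarily the repo's exact variable names):

```python
# Author-suggested CoBi weights as quoted in this thread (hypothetical dict).
author_weights = {"w_cont": 1.0, "w_patch": 1.5, "w_spatial": 0.5}
```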

Chokurei avatar Aug 26 '19 06:08 Chokurei

> So, what is your w_spatial? [...] Would you kindly explain your weight setting?

loss = 1.0*cobi_vgg + 1.0*cobi_rgb, with w_spatial = 0.5 for both CoBi losses.

llp1996 avatar Aug 26 '19 08:08 llp1996

> loss = 1.0*cobi_vgg + 1.0*cobi_rgb, with w_spatial = 0.5 for both CoBi losses.

Thanks a lot, I will try that. However, you also said that "the parameter w_spatial should be set bigger than 0.5"; setting w_spatial = 0.5 here is fine, right?

Chokurei avatar Aug 26 '19 08:08 Chokurei

> Thanks a lot, I will try that. However, you also said that "the parameter w_spatial should be set bigger than 0.5"; setting w_spatial = 0.5 here is fine, right?

I changed the description; I have used both 0.5 and 0.8. But I align the images using main_align.sh, main_crop.sh, and main_wb.sh.

llp1996 avatar Aug 26 '19 08:08 llp1996

> I have reproduced the paper with my own training code. It is better to set the parameter w_spatial to 0.5 or bigger, and I pretrain the model with an L1 loss. Although the CoBi loss doesn't decrease much, the result is amazingly clear.

Thanks for your help, we trained the zoom-learn-zoom model following your params and got an extremely good result.

bai-shang avatar Oct 09 '19 02:10 bai-shang

@bai-shang did you train the model on raw data or RGB data?

yanmenglu avatar Oct 09 '19 08:10 yanmenglu

@bai-shang can you share your train.py? Thank you.

qianzhang2018 avatar Oct 09 '19 08:10 qianzhang2018

> I changed the description; I have used both 0.5 and 0.8. But I align the images using main_align.sh, main_crop.sh, and main_wb.sh.

May I ask how you use tform.txt and wb.txt during training?
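One plausible way wb.txt could be consumed in a data loader, assuming it stores per-channel white-balance gains (the file layout and values below are assumptions, not confirmed by the repo):

```python
import numpy as np

# Hypothetical: wb.txt holds per-channel white-balance gains (R, G, B).
# Applying them to the input keeps its colors consistent with the ground
# truth before the loss is computed. The layout is an assumption.
wb = np.loadtxt('wb.txt')          # e.g. [1.9, 1.0, 1.6]
img = np.random.rand(512, 512, 3)  # stand-in for an input patch in [0, 1]
img_wb = np.clip(img * wb[None, None, :], 0.0, 1.0)
```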

IanYeung avatar Oct 22 '19 07:10 IanYeung

> I have reproduced the paper with my own training code. It is better to set the parameter w_spatial to 0.5 or bigger, and I pretrain the model with an L1 loss.

Dear llp: you said you have reproduced the code, so which training dataset did you use, SR-RAW or your own data? If you used SR-RAW, how did you crop the images at different scales, given that the author has not released the aligned version of the SR-RAW training dataset?

WenjiaWang0312 avatar Oct 24 '19 15:10 WenjiaWang0312

> Which training dataset did you use, SR-RAW or your own data? If you used SR-RAW, how did you crop the images at different scales?

I use SR-RAW, with ECC alignment first.
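For illustration, a generic OpenCV ECC alignment sketch of the kind referred to here (a plain cv2.findTransformECC example with assumed file names, not llp1996's actual code; the mask/gaussFiltSize arguments follow the OpenCV 4.x signature):

```python
import numpy as np
import cv2

# Generic ECC alignment sketch: estimate an affine warp that registers
# the wide-angle crop `src` to the telephoto crop `tmpl`. File names are
# placeholders.
tmpl = cv2.imread('tele_crop.png', cv2.IMREAD_GRAYSCALE).astype(np.float32)
src = cv2.imread('wide_crop.png', cv2.IMREAD_GRAYSCALE).astype(np.float32)

warp = np.eye(2, 3, dtype=np.float32)  # identity initialization
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 200, 1e-6)
cc, warp = cv2.findTransformECC(tmpl, src, warp, cv2.MOTION_AFFINE,
                                criteria, None, 5)

h, w = tmpl.shape
aligned = cv2.warpAffine(src, warp, (w, h), flags=cv2.INTER_LINEAR)
```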

llp1996 avatar Nov 20 '19 01:11 llp1996

> I use SR-RAW, with ECC alignment first.

Thank you, I have found the released script as well.

WenjiaWang0312 avatar Nov 20 '19 01:11 WenjiaWang0312

+1

CV-JunchengLi avatar Mar 22 '20 02:03 CV-JunchengLi

> Thanks for your help, we trained the zoom-learn-zoom model following your params and got an extremely good result.

Would you kindly share the code you trained with? Thanks!

wioponsen avatar Aug 06 '20 02:08 wioponsen

> Thanks for your help, we trained the zoom-learn-zoom model following your params and got an extremely good result.

Is it possible to share your code? I also did that, but found some obvious artifacts in some regions. Thank you in advance.

Chokurei avatar Aug 06 '20 03:08 Chokurei