adrienchaton

Results 57 comments of adrienchaton

Dear Josipd, I have been doing some checks of gradient calculation/propagation, and it seems that when I compute a loss criterion using torch_two_sample.statistics_diff.MMDStatistic between a generated batch and...

You can check this: https://github.com/xinntao/Real-ESRGAN/blob/master/Training.md. You just need to put a folder of raw images, run the multiscale script to rescale them (optional), then run the script to create the meta data...
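The preparation steps above can be sketched as two commands; the script names and flags below are assumptions based on the Real-ESRGAN repository layout (check Training.md for the exact ones), and the dataset paths are placeholders:

```shell
# 1. (optional) generate rescaled multiscale copies of the raw images
python scripts/generate_multiscale_DF2K.py \
    --input datasets/my_images \
    --output datasets/my_images_multiscale

# 2. create the meta-info text file listing the training images
python scripts/generate_meta_info.py \
    --input datasets/my_images datasets/my_images_multiscale \
    --meta_info datasets/meta_info/meta_info_my_images.txt
```

The meta-info file produced in step 2 is what the training option file points to via its `meta_info` field.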

Thanks @kodxana for the reply, it is great to hear that there would be such a way to do that from x4 checkpoints. However, I am not sure how this works,...

I have the default gt_size: 256, and in the L1 loss pred is of shape 128 while the target gt is of shape 256; it means the generator is still doing x4...

Okay @kodxana, could you maybe send me an example .yml option file that you use for finetuning from the RealESRGAN_x4plus checkpoint to x8? Then I can double check if...

> Can you send an example of your RealESRGAN_x4plus for x4? Please read the Training.md; there is almost nothing to change, just set the values you want for name, dataroot_gt,...
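For reference, a minimal sketch of such an x4 finetuning option file, assuming the Real-ESRGAN/BasicSR option format; the experiment name, dataset type, and paths below are placeholders, not values from this thread:

```yml
# minimal sketch of a finetuning option file (placeholder values)
name: finetune_RealESRGANx4plus_custom
scale: 4

datasets:
  train:
    name: my_dataset
    type: RealESRGANDataset
    dataroot_gt: datasets/my_images
    meta_info: datasets/meta_info/meta_info_my_images.txt
    gt_size: 256

path:
  # start from the released x4 checkpoint
  pretrain_network_g: experiments/pretrained_models/RealESRGAN_x4plus.pth
```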

Hello @xinntao, if you have a moment could you please explain how to finetune at x8 from your x4 checkpoint without using a paired dataset, but by generating training...

I didn't get to finetune an x4 model to x8 and I don't think it is actually possible. It would be nice to have other people's feedback, but I will...

It seems that, at line 208 (realesrgan/models/realesrgan_model.py, in optimize_parameters), self.output = self.net_g(self.lq) and the generated output has shape 128, which is x4, even after I set scale to 8. Whereas...
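The shape mismatch described above can be reproduced with simple arithmetic; this is a sketch of the reasoning (the function and variable names are mine, not from the repository): the generator's upsampling factor is fixed by its architecture, so changing `scale` in the option file only changes how the dataloader crops the LQ patch, not what net_g produces.

```python
def expected_output_size(lq_size: int, net_scale: int) -> int:
    """Spatial size produced by a generator whose upsampling factor
    is baked into the architecture (hypothetical helper)."""
    return lq_size * net_scale

gt_size = 256           # default gt_size from the option file
requested_scale = 8     # what the option file asks for
net_scale = 4           # RealESRGAN_x4plus upsamples by 4 regardless

# the dataloader prepares LQ patches for the requested x8 ...
lq_size = gt_size // requested_scale   # 256 // 8 = 32

# ... but the x4 generator only upsamples them by 4
pred_size = expected_output_size(lq_size, net_scale)
print(pred_size)  # 128, while gt is 256 -> the L1 loss shape mismatch
```

This matches the observation in the thread: pred of shape 128 against a gt of shape 256.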

Not for ESM-IF; I have not started this experiment yet, but I guess either I will write my own trainer for ESM-IF or check some other IF models with available...