Gavin666Github

Results 18 comments of Gavin666Github

Has anyone come across this phenomenon? The size of my training dataset is 20000 pairs. My configuration params: --batchSize 64 --no_html --nThreads 16 --niter 800 --niter_decay 8000 --epoch_count 729 --gpu_ids...

@zhujingsong Hi, Jingsong. Regarding "We use list files in data/ucf101/ subdir to make the code find RGB images and flow data saved on disk" — have you solved the problem? If yes, please share...

@XudongLinthu Hello, Xu. I'm in China, but I can't download from the shared link. Would you please share the datasets ![image](https://user-images.githubusercontent.com/35061171/48989222-43201600-f164-11e8-9260-e3071eb48139.png) with me via Baidu cloud disk? Many thanks!

@Rhythmblue aha~~ It's so big that it's going to be very slow to download. Thank you for your advice; I could try it.

I ran into this error too. Has it been solved?

I didn't train a baseline and started sparsity training directly. Is that a problem?

@JimXiongGM You're absolutely right. This is how the super-resolution field plays the game: take high-resolution images, downsample them with some fixed method to get LR images, train a model to map them back to high resolution, compute PSNR/SSIM to chase scores, and the paper is ready to submit. The problem is that this depends heavily on the data and on the particular downsampling method. Truly low-resolution images simply lack information, and they were very likely not produced by a standard downsampling pipeline, so they cannot be assumed to match those synthetic LR images in quality. That's the catch: when you super-resolve a real LR image, the resulting HR output may infuriate you.
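The evaluation loop criticized above can be sketched in a few lines. This is a minimal toy version, not any paper's actual pipeline: a box-filter downsample stands in for the fixed degradation, and nearest-neighbour upsampling stands in for a trained SR model; images are flat grayscale lists for simplicity.

```python
import math

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio between two equal-sized flat images."""
    mse = sum((r - t) ** 2 for r, t in zip(ref, test)) / len(ref)
    if mse == 0:
        return float("inf")
    return 10 * math.log10(max_val ** 2 / mse)

def downsample_2x(img, w, h):
    """Toy 2x box-filter downsample: the 'fixed degradation' that synthetic
    LR benchmarks bake in (real pipelines often use bicubic instead)."""
    out = []
    for y in range(0, h, 2):
        for x in range(0, w, 2):
            block = [img[(y + dy) * w + (x + dx)]
                     for dy in (0, 1) for dx in (0, 1)]
            out.append(sum(block) / 4)
    return out

def upsample_2x_nearest(img, w, h):
    """Nearest-neighbour upsampling as a stand-in for a trained SR model."""
    out = [0.0] * (w * 2 * h * 2)
    for y in range(h * 2):
        for x in range(w * 2):
            out[y * (w * 2) + x] = img[(y // 2) * w + (x // 2)]
    return out

# HR -> LR -> "SR" -> score against the original HR, on a 4x4 synthetic image.
hr = [float(v) for v in range(16)]
lr = downsample_2x(hr, 4, 4)
sr = upsample_2x_nearest(lr, 2, 2)
print(round(psnr(hr, sr), 2))  # → 41.85
```

The score only certifies that the model inverts *this* degradation; a real LR photo that never passed through the box filter gives no such guarantee, which is exactly the dependence the comment points out.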

netG = net.dehaze(inputChannelSize, outputChannelSize, ngf) — the error starts at this line; placing the code above here doesn't help.

@SumiHui @hezhangsprinter Thank you very much! I believe he will be happy and willing for you to share the link with others, because it benefits more people.

Mutual learning needs to run on two GPUs; it won't run on one. The memory isn't released in time, and GPU memory gradually overflows.