HantingChen

Results 105 comments of HantingChen

We use DIV2K to fine-tune the SR task.

> I want to ask in Fig. 6 of the paper, what is the task, dataset and setting when you compare the IPT with other CNNs. We compared them in...

> It seems that Fig. 6 in arxiv v1 and CVPR21 are different. Sorry for the mistake. We compared them on Set5 for 2x SR in arxiv v1,...

The reason for using the "forward_chop" function: the IPT model only supports a fixed input size (48×48 for both training and testing), while in image processing tasks the input images...
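The idea behind this kind of patch-wise inference can be sketched as follows. This is a minimal, non-overlapping tiling sketch in NumPy, not the repo's actual `forward_chop` (which crops overlapping regions and handles the SR scale factor); `tile_inference` and `model_fn` are hypothetical names introduced here for illustration.

```python
import numpy as np

PATCH = 48  # IPT's fixed input size for training and testing


def tile_inference(img, model_fn, patch=PATCH):
    """Run a fixed-input-size model over an arbitrarily sized image.

    Sketch only: pads the image to a multiple of `patch`, applies
    `model_fn` to each patch-sized tile, then crops back. `img` is an
    (H, W, C) array; `model_fn` maps a (patch, patch, C) tile to a tile
    of the same shape.
    """
    h, w, _ = img.shape
    # Pad so both spatial dimensions become multiples of the patch size.
    pad_h, pad_w = (-h) % patch, (-w) % patch
    padded = np.pad(img, ((0, pad_h), (0, pad_w), (0, 0)), mode="edge")
    out = np.zeros_like(padded)
    # Process each 48x48 tile independently and write it back in place.
    for y in range(0, padded.shape[0], patch):
        for x in range(0, padded.shape[1], patch):
            out[y:y + patch, x:x + patch] = model_fn(padded[y:y + patch, x:x + patch])
    return out[:h, :w]  # crop away the padding
```

In practice, overlapping tiles (as in the real `forward_chop`) avoid visible seams at patch borders; the non-overlapping version above only illustrates the chop-and-stitch control flow.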

> Thanks for your reply! > > And I have another question. Is this process of "chop these images into 48×48 size and put them into the IPT model" needed...

The model has 33G FLOPs and 454MB storage cost.

Hi, the MindSpore version of the model and the PyTorch version are not interchangeable for now.

> > Thanks for your reply! I want to finetune on a new dataset, so I need the MindSpore version of the pretrained model, but I couldn't find it in MindSpore Hub. > > I modified the PyTorch pretrained model so that MindSpore can load it, but finetuning still fails: the loss is NaN. Can the model be finetuned normally after such a modification? Yes, it can. The NaN loss is most likely caused by a problem in the modified pretrained model; you could also try pretraining a model yourself.

You can directly rename the checkpoint file to .pt instead of .ckpt.
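For example, since only the file extension needs to change, a plain rename is enough (the filename `IPT_pretrain.ckpt` below is illustrative; substitute your actual checkpoint name):

```shell
# Rename the checkpoint so tooling that expects a .pt extension accepts it;
# the file contents are untouched.
mv IPT_pretrain.ckpt IPT_pretrain.pt
```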