NAFNet
Exporting the model to ONNX
Can NAFNet be exported to an ONNX-format model?
Hi trexliu, we currently have no plans to support this feature in the framework, but PRs are very welcome; alternatively, provide a link and we can add it to the README.
Thanks!
The denoising and dehazing results look impressive. I tried the conversion myself and failed; please consider supporting export to ONNX or LibTorch so the model can be used in real applications.
I rewrote norm1 and norm2 in NAFNet_arch.py:
```python
# In NAFBlock.__init__:
# self.norm1 = LayerNorm2d(c)
# self.norm2 = LayerNorm2d(c)
self.norm1 = torch.nn.LayerNorm(c)
self.norm2 = torch.nn.LayerNorm(c)

def forward(self, inp):
    x = inp
    # NCHW -> NWHC so that nn.LayerNorm(c) normalizes over the channel dim
    x = torch.permute(x, (0, 3, 2, 1))
    x = self.norm1(x)
    x = torch.permute(x, (0, 3, 2, 1))  # back to NCHW
    x = self.conv1(x)
    x = self.conv2(x)
    x = self.sg(x)
    x = x * self.sca(x)
    x = self.conv3(x)
    x = self.dropout1(x)

    y = inp + x * self.beta

    yy = torch.permute(y, (0, 3, 2, 1))
    yy = self.norm2(yy)
    x = torch.permute(yy, (0, 3, 2, 1))
    x = self.conv4(x)
    # x = self.conv4(self.norm2(y))
    x = self.sg(x)
    x = self.conv5(x)
    x = self.dropout2(x)

    return y + x * self.gamma
```
```python
dummy_input = torch.randn(1, 3, 256, 256, device="cuda")
input_names = ["actual_input_1"] + ["learned_%d" % i for i in range(16)]
output_names = ["output1"]

torch.onnx.export(net, dummy_input, "NAFNet.onnx", verbose=True,
                  input_names=input_names, output_names=output_names,
                  opset_version=11)
```
Using the code above, I converted the model to ONNX.
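As a quick sanity check of the exported file, one can compare the ONNX Runtime output against the PyTorch output on the same dummy input. This is only a sketch: it assumes the `onnx` and `onnxruntime` packages are installed and that `net` and `dummy_input` are still in scope from the export above.

```python
import numpy as np
import onnx
import onnxruntime as ort

# Structural check of the exported graph
onnx.checker.check_model(onnx.load("NAFNet.onnx"))

# Run the ONNX model on the same dummy input that was used for tracing
sess = ort.InferenceSession("NAFNet.onnx", providers=["CPUExecutionProvider"])
onnx_out = sess.run(None, {"actual_input_1": dummy_input.cpu().numpy()})[0]

# Compare against the PyTorch network (eval mode so dropout is disabled)
net.eval()
with torch.no_grad():
    torch_out = net(dummy_input).cpu().numpy()
print("max abs diff:", np.abs(onnx_out - torch_out).max())
```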
Hello, could you describe your export process in more detail? When loading the network `net`, how is it defined? I followed `model = create_model(opt)`, which gives `basicsr.models.image_restoration_model.ImageRestorationModel()`, i.e. the model wrapper used by the framework. Passing that to the export raises `AttributeError: 'function' object has no attribute 'training'`. I hope you can answer.
I first create a layer:

```python
class PermuteLayer(nn.Module):
    def __init__(self, **kwargs):
        super(PermuteLayer, self).__init__(**kwargs)

    def forward(self, x):
        return torch.permute(x, (0, 3, 2, 1))
```

register it in the block's `__init__`:

```python
self.permute = PermuteLayer()
```

and then use this layer in NAFBlock's forward:

```python
class NAFBlock(nn.Module):
    def forward(self, inp):
        x = inp
        x = self.permute(x)
        x = self.norm1(x)
        x = self.permute(x)
        ...
        return y + x * self.gamma
```

The network itself is created as:

```python
net = NAFNet(img_channel=img_channel, width=width, middle_blk_num=middle_blk_num,
             enc_blk_nums=enc_blks, dec_blk_nums=dec_blks)
```
I only tested the model with `dummy_input = torch.randn(1, 3, 256, 256, device="cuda")`. You can get a summary of the net with `summary(net, (3, 256, 256))`.
I have not trained the model yet, but I will start later.
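To address the `AttributeError` above: `torch.onnx.export` needs the bare `nn.Module`, not the `ImageRestorationModel` wrapper returned by `create_model(opt)`. Below is only a rough sketch of how a released checkpoint could be loaded into the raw network before exporting. The import path, checkpoint path, and the width/block numbers are placeholders that must be copied from the corresponding YAML option file, and the `'params'` key is an assumption based on the usual BasicSR checkpoint layout.

```python
import torch
from basicsr.models.archs.NAFNet_arch import NAFNet

# Placeholder architecture settings -- copy the real values from the option YAML
net = NAFNet(img_channel=3, width=32, middle_blk_num=12,
             enc_blk_nums=[2, 2, 4, 8], dec_blk_nums=[2, 2, 2, 2])

ckpt = torch.load("experiments/pretrained_models/NAFNet-SIDD-width32.pth",
                  map_location="cpu")
# BasicSR-style checkpoints usually keep the state dict under the 'params' key
state_dict = ckpt.get("params", ckpt)
net.load_state_dict(state_dict, strict=True)
net.eval().cuda()
```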
I got it working yesterday. I took the model's network out separately and exported it the same way as the ONNX export example on the official PyTorch website; I tested it and it works. Thank you. I was planning to change the input size to (720, 1280), but my local GPU memory is not enough, so I will try that on a server; (256, 256) works fine.
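If GPU memory is the only blocker for exporting at (720, 1280), two things may help: tracing the export on CPU, and/or marking the height and width as dynamic axes so one ONNX file accepts several input sizes. This is only a sketch and assumes the rewritten blocks contain no resolution-dependent reshapes or padding that tracing would bake in as constants.

```python
# Export on CPU to avoid GPU memory limits; tracing does not need CUDA
net_cpu = net.cpu().eval()
dummy_input = torch.randn(1, 3, 256, 256)

torch.onnx.export(
    net_cpu, dummy_input, "NAFNet_dynamic.onnx",
    input_names=["actual_input_1"],
    output_names=["output1"],
    # Let height and width vary at inference time, e.g. 720 x 1280
    dynamic_axes={"actual_input_1": {2: "height", 3: "width"},
                  "output1": {2: "height", 3: "width"}},
    opset_version=11,
)
```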
Sorry to bother you again. After converting to ONNX, when I run verification with the ONNX model, the output is a NumPy array, which I then pass through the following steps:

```python
out_tensor = torch.from_numpy(onnx_out[0])
sr_img = tensor2img([out_tensor])
imwrite(sr_img, output_path)
```

I do get a result, but the image does not look the same as the output of the original model, and it also shows a grid pattern.
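For context, here is a sketch of how `onnx_out` could be produced with ONNX Runtime. The preprocessing shown (BGR to RGB, scaling to [0, 1], HWC to CHW) is an assumption and has to match whatever the original PyTorch test script does; any difference there would change the output. `input_path` is a placeholder.

```python
import cv2
import numpy as np
import onnxruntime as ort

# 'input_path' is a placeholder; the image size must match the exported
# input shape unless the model was exported with dynamic axes
img = cv2.imread(input_path).astype(np.float32) / 255.0   # HWC, BGR, [0, 1]
img = img[:, :, ::-1]                                      # BGR -> RGB
inp = np.ascontiguousarray(img.transpose(2, 0, 1))[None]   # 1 x 3 x H x W

sess = ort.InferenceSession("NAFNet.onnx", providers=["CPUExecutionProvider"])
onnx_out = sess.run(None, {"actual_input_1": inp})
```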

In my opinion, in order to convert the PyTorch model to ONNX successfully, the model has to be rewritten. To get the correct image output, the rewritten model should be retrained. Once training is finished you can convert the trained model to ONNX, and then you should see the correct image. I haven't trained the model yet; you can give it a try.
@hzk7287 Here is the solution: separate the model from the preprocessing operations, and the conversion becomes easy and successful: https://blog.csdn.net/TF666666/article/details/125678629?spm=1001.2014.3001.5502
Problem solved, thank you!
You're welcome!
How did you solve it?
Reference link: https://blog.csdn.net/TF666666/article/details/125678629?spm=1001.2014.3001.5502
Hi, is there a blog post on converting the model from ONNX to TensorRT format?