GFPGANv1.3-to-ncnn

How to use the exported GFPGAN ONNX model

ssometimes opened this issue 2 years ago · 4 comments

Hello, I have not used ONNX before, but I need to work with it in this environment. Is there a code example showing how to use the exported model? Thank you very much.

ssometimes · Sep 21 '22 03:09
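
For reference, here is a minimal ONNX Runtime inference sketch (not taken from this repo). It assumes the exported graph takes a 1x3x512x512 float input normalized to [-1, 1] in RGB order and returns the restored face in the same range; adjust the file name, preprocessing and output handling to match your actual export.

import cv2
import numpy as np
import onnxruntime as ort

# Names, shapes and value ranges below are assumptions about the export,
# not facts from this thread; adjust them to match your model.
sess = ort.InferenceSession("GFPGAN_v1.onnx", providers=["CPUExecutionProvider"])

img = cv2.imread("face.png", cv2.IMREAD_COLOR)            # HxWx3, BGR, uint8
img = cv2.resize(img, (512, 512))
x = cv2.cvtColor(img, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0
x = (x - 0.5) / 0.5                                       # [0,1] -> [-1,1]
x = np.ascontiguousarray(x.transpose(2, 0, 1)[None])      # HWC -> 1x3x512x512

out = sess.run(None, {sess.get_inputs()[0].name: x})[0]   # first graph output
out = out[0].transpose(1, 2, 0)                           # 1x3xHxW -> HxWx3, RGB
out = ((out.clip(-1, 1) + 1) / 2 * 255).round().astype(np.uint8)
cv2.imwrite("restored.png", cv2.cvtColor(out, cv2.COLOR_RGB2BGR))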

When compiling, an error is reported on the import "from basicsr.losses.losses import r1_penalty". I could not find basicsr.losses.losses in the original GFPGAN project, only basicsr.losses.gan_loss; after replacing the import, it compiles normally. However, when the exported model is used, the output photo has a strong green color cast.

ssometimes · Sep 21 '22 10:09
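
A green or otherwise shifted color cast after export is often a preprocessing mismatch rather than a model defect. One hypothetical check (not confirmed as the cause here) is to run the same face through the session with and without a channel swap and see which output looks natural:

import numpy as np

# Hypothetical color-cast check; assumes an onnxruntime session "sess" and a
# 512x512 BGR uint8 face image. Not a confirmed fix for this issue.
def run(sess, img_bgr, swap_to_rgb):
    x = img_bgr.astype(np.float32) / 255.0
    if swap_to_rgb:
        x = x[..., ::-1]                                  # BGR -> RGB
    x = (x - 0.5) / 0.5
    x = np.ascontiguousarray(x.transpose(2, 0, 1)[None])
    return sess.run(None, {sess.get_inputs()[0].name: x})[0]

# Compare run(sess, face, True) against run(sess, face, False); whichever
# output has natural colors tells you the channel order the graph expects.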

@ssometimes GFPGAN onnx

magicse · Sep 21 '22 13:09

I am trying to convert a GFPGANv1 model from .pth to ONNX format, but I get an error.

Here is the conversion code:


import cv2
from basicsr.utils import img2tensor
from torchvision.transforms.functional import normalize
import torch

from gfpgan.archs.gfpganv1_arch import GFPGANv1

model_path = "./experiments/Mayu_Models/subface_net_g_250000.pth"
onnx_path = "./experiments/deploy/GFPGAN_v1.onnx"

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

inference_model = GFPGANv1(
        out_size=512,
        num_style_feat=512,
        channel_multiplier=1,
        decoder_load_path=None,
        fix_decoder=False,
        num_mlp=8,
        input_is_latent=True,
        different_w=True,
        narrow=1,
        sft_half=True).to(device)

loadnet = torch.load(model_path, map_location=device)  # map weights to the active device
if 'params_ema' in loadnet:
    keyname = 'params_ema'
else:
    keyname = 'params'
inference_model.load_state_dict(loadnet[keyname], strict=False)
inference_model = inference_model.eval()

img_path = './inputs/cropped_faces/1.png'
input_img = cv2.imread(img_path, cv2.IMREAD_COLOR)
img = cv2.resize(input_img, (512, 512))
cropped_face_t = img2tensor(img / 255., bgr2rgb=True, float32=True)
normalize(cropped_face_t, (0.5, 0.5, 0.5), (0.5, 0.5, 0.5), inplace=True)
cropped_face_t = cropped_face_t.unsqueeze(0).to(device)

mat1 = torch.randn(3, 512, 512)  # unused dummy tensor; the export below traces with cropped_face_t
mat1 = mat1.unsqueeze(0).to(device)

torch.onnx.export(inference_model,  # model being run
                    (cropped_face_t),  # model input (or a tuple for multiple inputs)
                    onnx_path,  # where to save the model (can be a file or file-like object)
                    export_params=True,  # store the trained parameter weights inside the model file
                    opset_version=11,  # the ONNX version to export the model to
                    do_constant_folding=True,  # whether to execute constant folding for optimization
                    verbose=True,
                    input_names=['input'],  # the model's input names
                    output_names=['out_ab']  # the model's output names
                    )

print("export GFPGANv1 onnx done.")

Here is the error:


============= Diagnostic Run torch.onnx.export version 2.0.1+cu117 =============
verbose: False, log level: Level.ERROR
======================= 0 NONE 0 NOTE 0 WARNING 0 ERROR ========================

Traceback (most recent call last):
  File "convert_onnx.py", line 53, in <module>
    torch.onnx.export(inference_model,  # model being run
  File "/home/ubuntu/anaconda3/envs/gfg/lib/python3.8/site-packages/torch/onnx/utils.py", line 506, in export
    _export(
  File "/home/ubuntu/anaconda3/envs/gfg/lib/python3.8/site-packages/torch/onnx/utils.py", line 1548, in _export
    graph, params_dict, torch_out = _model_to_graph(
  File "/home/ubuntu/anaconda3/envs/gfg/lib/python3.8/site-packages/torch/onnx/utils.py", line 1117, in _model_to_graph
    graph = _optimize_graph(
  File "/home/ubuntu/anaconda3/envs/gfg/lib/python3.8/site-packages/torch/onnx/utils.py", line 607, in _optimize_graph
    _C._jit_pass_peephole(graph, True)

humayun · Aug 09 '23 02:08
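
One variation sometimes worth trying here (a sketch, not verified against this exact crash) is exporting under torch.no_grad() with constant folding disabled, which changes which optimization passes run on the traced graph:

import torch

# Sketch only: inference_model, cropped_face_t and onnx_path come from the
# script above; disabling constant folding is a guess, not a confirmed fix.
with torch.no_grad():
    torch.onnx.export(
        inference_model,
        cropped_face_t,
        onnx_path,
        export_params=True,
        opset_version=11,
        do_constant_folding=False,   # skip constant folding during export
        input_names=['input'],
        output_names=['out_ab'],
    )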

It may be a problem with the version of torch...

magicse · Aug 09 '23 04:08
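
If it is a torch version issue, printing the version in the failing environment and, once an export succeeds under a different release, validating the file with the ONNX checker can help narrow it down. A minimal sketch:

import torch
import onnx

print(torch.__version__)                     # 2.0.1+cu117 in the trace above

# Once a .onnx file exists, verify the graph is well-formed and inspect its IO:
m = onnx.load("./experiments/deploy/GFPGAN_v1.onnx")
onnx.checker.check_model(m)                  # raises if the graph is malformed
print([i.name for i in m.graph.input],
      [o.name for o in m.graph.output])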