GFPGAN-onnxruntime-demo
Can I run batch inference after converting the GFPGAN .pth model with torch2onnx.py?
Hi, is there any chance I can do batch inference with the ONNX model? I notice the example does single-image inference. Does the ONNX model generated by torch2onnx.py support batch inference?
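
Below is a minimal sketch of one possible approach, not the repo's actual torch2onnx.py: whether batch inference works depends on whether the export marks the batch dimension as dynamic, which `torch.onnx.export` does via `dynamic_axes`. The sketch assumes the upstream `gfpgan` package (`GFPGANv1Clean`) and the GFPGANv1.4 checkpoint; model construction, opset, and noise handling in the repo's script may differ, so treat everything except the `dynamic_axes` idea as illustrative.

```python
# Sketch only: export GFPGAN with a dynamic batch axis, then run a batched
# ONNX Runtime session. Paths, model args, and opset are assumptions.
import numpy as np
import torch
import onnxruntime as ort
from gfpgan.archs.gfpganv1_clean_arch import GFPGANv1Clean


class GFPGANWrapper(torch.nn.Module):
    """Thin wrapper so the exported graph has a single restored-image output."""

    def __init__(self, net):
        super().__init__()
        self.net = net

    def forward(self, x):
        out, _ = self.net(x, return_rgb=False, randomize_noise=False)
        return out


# Build and load the network (arguments follow the upstream GFPGANv1.4 setup).
net = GFPGANv1Clean(
    out_size=512, num_style_feat=512, channel_multiplier=2,
    decoder_load_path=None, fix_decoder=False, num_mlp=8,
    input_is_latent=True, different_w=True, narrow=1, sft_half=True)
state = torch.load('GFPGANv1.4.pth', map_location='cpu')
net.load_state_dict(state['params_ema'], strict=False)
model = GFPGANWrapper(net).eval()

# Export with dimension 0 marked dynamic; a single dummy sample is enough for tracing.
dummy = torch.randn(1, 3, 512, 512)
torch.onnx.export(
    model, dummy, 'gfpgan_dynamic.onnx',
    input_names=['input'], output_names=['output'],
    opset_version=11,
    dynamic_axes={'input': {0: 'batch'}, 'output': {0: 'batch'}})

# Batch inference: stack N preprocessed faces into one (N, 3, 512, 512) float32 array.
sess = ort.InferenceSession('gfpgan_dynamic.onnx', providers=['CPUExecutionProvider'])
batch = np.random.rand(4, 3, 512, 512).astype(np.float32)  # placeholder for real normalized faces
restored = sess.run(None, {'input': batch})[0]
print(restored.shape)  # expected: (4, 3, 512, 512)
```

If the existing export was done with a fixed batch dimension of 1, the generated model will reject larger batches; re-exporting with `dynamic_axes` as above (or looping over images one at a time) would be the workaround.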