DocShadow-ONNX-TensorRT

Export ONNX on GPU!

Open shubhoppo opened this issue 1 year ago • 0 comments

Hi @fabio-sim, thanks for the great work. Currently the sd7k model is exported on CPU, and when I run inference on GPU, the exported model is slower than on CPU. I changed the device in export.py: device = torch.device("cuda:0")  # Device on which to export. Now it throws an error that the model and dynamic_axes are not on the same device. Could you please check and help me get the export working on CUDA?

shubhoppo avatar Dec 26 '23 05:12 shubhoppo