DocShadow-ONNX-TensorRT
Export ONNX on GPU
Hi @fabio-sim, thanks for the great work. Currently the SD7K model is exported on CPU, and when I run inference on GPU the exported model is slower than on CPU. I changed `export.py` to use `device = torch.device("cuda:0")  # Device on which to export`, but the export now throws an error saying the model and the inputs are not on the same device. Could you please take a look and help me move everything needed for the export onto CUDA?