Stable-Diffusion-ONNX-FP16
DirectML not supported on Linux
Hi, can we not use onnxruntime-gpu to convert a model to ONNX? On CPU it takes about 30 minutes and 12 GB of RAM per model. I tried changing CPUExecutionProvider to CUDAExecutionProvider after installing onnxruntime-gpu, but everything still loaded on the CPU. I am using Colab.
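For context, a minimal sketch of how the execution provider can be checked and requested explicitly with ONNX Runtime; `model.onnx` here is just a placeholder path for whatever exported model is being loaded. One common cause of silent CPU fallback (an assumption about this setup, not confirmed) is that the plain `onnxruntime` package is installed alongside `onnxruntime-gpu` and shadows it, in which case uninstalling the CPU build first may help.

```python
import onnxruntime as ort

# If "CUDAExecutionProvider" is missing from this list, the GPU build
# isn't the one being imported (e.g. plain onnxruntime shadows
# onnxruntime-gpu), and sessions will silently run on CPU.
print(ort.get_available_providers())

# Request CUDA explicitly, keeping CPU as a fallback.
# "model.onnx" is a placeholder for the exported model path.
session = ort.InferenceSession(
    "model.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)

# Confirm which provider the session actually bound to.
print(session.get_providers())
```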