Stable-Diffusion-ONNX-FP16

DirectML not supported on Linux

Open · Fcucgvhhhvjv opened this issue 1 year ago · 0 comments

Hi, can we not use onnxruntime-gpu for converting a model to ONNX? On CPU it takes about 30 minutes and 12 GB of RAM per model. I tried changing CPUExecutionProvider to CUDAExecutionProvider after installing onnxruntime-gpu, but everything still loaded on the CPU. I am using Colab.
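One common cause of this symptom (onnxruntime-gpu installed, but inference still on CPU) is that the plain CPU `onnxruntime` package is also installed and shadows the GPU build, so `CUDAExecutionProvider` never appears in the available-provider list. A minimal sketch of how one might check for this and request CUDA with a CPU fallback is below; the `model.onnx` path and the presence of an onnxruntime install are assumptions for illustration, not part of the original issue.

```python
def pick_providers(available):
    """Prefer CUDA when the GPU build exposes it; always keep CPU as a fallback."""
    preferred = ["CUDAExecutionProvider", "CPUExecutionProvider"]
    chosen = [p for p in preferred if p in available]
    return chosen or ["CPUExecutionProvider"]


try:
    import onnxruntime as ort  # assumes onnxruntime or onnxruntime-gpu is installed

    providers = pick_providers(ort.get_available_providers())
    print("Will request:", providers)
    # If CUDAExecutionProvider is missing from get_available_providers(),
    # the CPU-only package is likely shadowing the GPU one; a typical fix is:
    #   pip uninstall onnxruntime && pip install onnxruntime-gpu
    # Passing providers explicitly when creating the session (hypothetical
    # model path shown) then routes execution to the GPU:
    # sess = ort.InferenceSession("model.onnx", providers=providers)
except ImportError:
    pass  # onnxruntime not installed in this environment
```

Note that export itself (tracing the PyTorch model into an ONNX graph) runs in PyTorch, so it only uses the GPU if the model is moved there before export; the execution provider affects which device the exported ONNX model *runs* on.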

Fcucgvhhhvjv · Aug 05 '23 05:08