
Is support for ROCMExecutionProvider planned?

Open ak1932 opened this issue 1 year ago • 3 comments

Search before asking

  • [X] I have searched the Inference issues and found no similar feature requests.

Description

Support for ROCMExecutionProvider would be great

Use case

This would make Roboflow more accessible, and users with AMD GPUs would benefit greatly.

Additional

No response

Are you willing to submit a PR?

  • [ ] Yes I'd like to help by submitting a PR!

ak1932 avatar Jul 09 '24 03:07 ak1932

Onnxruntime has stable onnxruntime-rocm packages (though not on PyPI). Also, is there any way I can temporarily use ROCMExecutionProvider?

ak1932 avatar Jul 09 '24 03:07 ak1932

We don't plan to officially support this at the moment because we don't have any customers that need/use it and don't have any hardware to test with.

But you should be able to get hardware acceleration on ROCm by installing the onnxruntime-rocm package manually. It should work just like the onnxruntime-silicon runtime does for macOS acceleration.

Once installed, use the ONNXRUNTIME_EXECUTION_PROVIDERS="[ROCMExecutionProvider, CPUExecutionProvider]" environment variable to ensure that it gets picked up & used.
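A minimal sketch of the steps above, assuming onnxruntime-rocm has already been installed manually (it is not on PyPI). Setting the variable before importing inference ensures the provider list is read when the ONNX session is created:

```python
import os

# Request ROCm first, falling back to CPU if it is unavailable.
# This must be set before the inference package builds its ONNX sessions.
os.environ["ONNXRUNTIME_EXECUTION_PROVIDERS"] = (
    "[ROCMExecutionProvider, CPUExecutionProvider]"
)

# With the variable in place, load a model as usual, e.g.:
# from inference import get_model
# model = get_model(model_id="yolov8n-640")
```

Alternatively, export the variable in the shell before launching your script, exactly as in the comment above.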

yeldarby avatar Jul 09 '24 03:07 yeldarby

Ohh ok. I'll try it out. Thanks for the help.

ak1932 avatar Jul 09 '24 19:07 ak1932