
GPU support for custom inference runtimes in MLServer

Open koolgax99 opened this issue 1 year ago • 0 comments

I am trying to use a GPU in my custom inference endpoint built with MLServer, but I am unable to load the model on the GPU. Can you please let me know whether this is possible?

Thank you

koolgax99 avatar Aug 28 '24 20:08 koolgax99
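The issue has no replies, so as a hedged sketch rather than an official answer: in a custom MLServer runtime the user's own `load()` implementation controls model loading, so GPU placement is typically done with the ML framework's device API. Below is an illustrative PyTorch version; the class name, placeholder model, and `infer` helper are all assumptions for demonstration. A real runtime would subclass `mlserver.MLModel` and read model paths from its settings.

```python
import asyncio

import torch


class CustomGPURuntime:
    """Illustrative stand-in for a custom MLServer runtime.

    A real runtime would subclass mlserver.MLModel and implement
    `async def load(self)` / `async def predict(self, payload)`;
    only the device-placement logic is sketched here.
    """

    async def load(self) -> bool:
        # Prefer the GPU when one is visible to the process, else fall back to CPU.
        self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
        # Placeholder model; a real runtime would load trained weights instead.
        self.model = torch.nn.Linear(4, 2).to(self.device)
        self.model.eval()
        return True

    async def infer(self, x: torch.Tensor) -> torch.Tensor:
        # Inputs must live on the same device as the model's parameters.
        with torch.no_grad():
            return self.model(x.to(self.device))


if __name__ == "__main__":
    rt = CustomGPURuntime()
    asyncio.run(rt.load())
    out = asyncio.run(rt.infer(torch.randn(1, 4)))
    print(rt.device.type, tuple(out.shape))
```

Note that when MLServer runs inside a container or Kubernetes pod, the GPU must also be exposed to that environment (e.g. a CUDA-enabled image and GPU resources on the pod), otherwise `torch.cuda.is_available()` returns `False` and the sketch above silently falls back to CPU.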