Brian Pickrell
I just saw this same failure with the `GoogleFnet` model. An mxr file created from `GoogleFnet` is on `hyd-7c-ZT09-02.amd.com`.
Thanks. I made the last change Paul requested; the only remaining open conversation was just a question.
@CharlieL7, I think I have to re-request a review to get this accepted.
Latest update:
- I tried pulling and building Triton-inference-server using the Python script mentioned at https://github.com/triton-inference-server/server/blob/main/docs/customization_guide/build.md#building-with-docker (rough invocation sketched below). No luck; it failed with a not-authorized error, which I've asked about.
- cloned...
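For reference, the invocation I was attempting looked roughly like this; the flag set is taken from the linked build guide and may not match exactly what I ran:

```
# Sketch of the Dockerized build from the linked build guide
# (flag set is my reading of that doc, not necessarily what I ran).
git clone https://github.com/triton-inference-server/server.git
cd server

# build.py drives the Docker-based build; --enable-gpu pulls in the GPU base
# image and --backend selects which backends get built into the server image.
python3 build.py -v \
    --enable-logging --enable-stats --enable-tracing --enable-metrics \
    --enable-gpu \
    --endpoint=http --endpoint=grpc \
    --backend=onnxruntime
```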
### Note to myself on how I ran an example

This doesn't introduce the execution provider yet.

1. Get the triton-inference-server repo: `git clone git@github.com:triton-inference-server/server.git`
2. Go to the examples...
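Since the note above is cut off, here is roughly how the prebuilt server gets launched per the Triton quickstart; the image tag and paths below are placeholders, not the exact values I used.

```
# Rough reconstruction of running the prebuilt server image (per the
# Triton quickstart). Image tag and paths are placeholders, not exact values.
cd server/docs/examples
./fetch_models.sh                      # pulls down the example model repository

docker run --rm --gpus=all \
    -p 8000:8000 -p 8001:8001 -p 8002:8002 \
    -v "$(pwd)/model_repository:/models" \
    nvcr.io/nvidia/tritonserver:24.05-py3 \
    tritonserver --model-repository=/models
```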
The above doesn't follow the order of Ted's earlier note: I'm running a prebuilt Docker image of the server before having built my own.
Here's what the onnxruntime shared libraries look like, as installed in the server Docker image I used above:

```
root@home-tower:/opt/tritonserver/backends# ll onnxruntime/
total 507608
drwxrwxrwx 3 triton-server triton-server 4096 Apr 18...
```
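If anyone wants to reproduce that listing without exec'ing into a running container, something like this should work (the image tag is a placeholder):

```
# List the ONNX Runtime backend libraries baked into the server image
# (tag is a placeholder; use whichever tritonserver image you pulled).
docker run --rm --entrypoint ls nvcr.io/nvidia/tritonserver:24.05-py3 \
    -l /opt/tritonserver/backends/onnxruntime
```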
Does your Docker image contain a Triton server? I thought this was a replacement for the example Docker image that had the server included.
Don't we need `--enable-gpu`?
Here's a tidbit from the issues page: **By default, if GPU support is enabled, the base image is set to the [Triton NGC min container](https://github.com/triton-inference-server/server/blob/main/build.py#L1487), otherwise [ubuntu:22.04 image](https://github.com/triton-inference-server/server/blob/main/build.py#L1491) is used...
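If I'm reading that right, `--enable-gpu` is what flips the base image. A rough sketch of the two cases, plus the base-image override the build docs describe (the `--image` syntax here is my assumption, I haven't verified it):

```
# With --enable-gpu the build defaults to the Triton NGC "min" container as
# the base image; without it, ubuntu:22.04 is used instead.
python3 build.py -v --enable-gpu --backend=onnxruntime

# The build docs also describe overriding the GPU base image explicitly;
# treat the exact --image syntax as an assumption on my part.
python3 build.py -v --enable-gpu --backend=onnxruntime \
    --image=gpu-base,nvcr.io/nvidia/tritonserver:24.05-py3-min
```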