Grople
> Hi @youwan114, can you share the output of `nvidia-smi` so we can check the CUDA driver version? I remember we have seen a similar issue before, and upgrading the CUDA driver version...
This is my tritonserver log, run with the command `tritonserver --log-verbose=1 ......`:

```
I0823 03:10:20.056693 14719 http_server.cc:187] Started Metrics Service at 0.0.0.0:8002
I0823 03:10:33.767570 14719 http_server.cc:3452] HTTP request: 2 /v2/models/custom_model/infer
I0823 03:10:33.767715...
```
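As a side note for anyone reproducing this, the `/v2/models/custom_model/infer` endpoint in the log above follows the KServe v2 inference protocol. Below is a minimal sketch of the JSON request body that endpoint expects; the input name `INPUT0`, the shape, and the `FP32` datatype are placeholders, not the actual config of `custom_model`.

```python
import json

def build_infer_request(data):
    """Build a minimal KServe v2 infer request body.

    Assumption: the model takes a single FP32 input named "INPUT0";
    replace name/shape/datatype with the values from your model's
    config.pbtxt before sending this to Triton.
    """
    body = {
        "id": "1",
        "inputs": [
            {
                "name": "INPUT0",
                "shape": [1, len(data)],
                "datatype": "FP32",
                "data": data,
            }
        ],
    }
    return json.dumps(body)

# The serialized body would be POSTed to
# http://localhost:8000/v2/models/custom_model/infer
payload = build_infer_request([0.1, 0.2, 0.3])
print(payload)
```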
Continuing to test, I tried the CPU version of the PyTorch model (fc_model_pt), and the results came out correctly, so I do think the problem lies in CUDA.
For further information:

+ **CPU info:**

```
I0823 09:45:06.263810 72295 http_server.cc:3372] HTTP request: 2 /v2/models/custom_model/infer
I0823 09:45:06.263942 72295 infer_request.cc:729] [request id: 1] prepared: [0x0x7f211c009050] request id: 1, model: custom_model, requested...
```
Hello @krishung5, thank you very much for your reply. This is the command I used to run the docker container, which produced the results above:

```
docker run --gpus all -itd...
```