What's the difference when starting tritonserver with `mpirun --allow-run-as-root -n 1 /opt/tritonserver/bin/tritonserver` vs. `/opt/tritonserver/bin/tritonserver` directly?
Description
I am observing a difference in TritonServer's behavior when it is started with mpirun compared to starting it directly. Specifically, when I use `mpirun --allow-run-as-root -n 1 /opt/tritonserver/bin/tritonserver`, the server runs and performs inference normally, but when I start it directly with `/opt/tritonserver/bin/tritonserver`, I see higher CPU usage and slower inference speeds.
The following is the normally started tritonserver's CPU usage from `top` and process information from `ps auxfww`:

- The CPU usage information: (screenshot)
- The process information: (screenshot)
The following is the abnormally started tritonserver's CPU and GPU information:

- The CPU usage information: (screenshot)
- The `nvidia-smi` information: (screenshot)
- The blocked startup logs resulting from the high CPU usage: (screenshot)
The following are the model files: (screenshot)
Notes:
- The configurations of the two runs are identical; only the start command differs.
- All of the models run on the Triton python backend.
I am wondering whether `mpirun --allow-run-as-root -n 1` sets any specific configuration (environment variables, CPU binding, etc.) that might be influencing TritonServer's behavior.
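One way to narrow this down is to compare what the process actually inherits in each launch mode. As a minimal sketch (the file names and the `diff_envs` helper are my own, not part of Triton), you can dump the environment seen under `mpirun` and under a direct launch, then diff the two dumps to surface anything `mpirun` injects (typically `OMPI_*`/`PMIX_*` variables); checking CPU affinity with `taskset -cp <pid>` in both modes is a similar comparison:

```shell
#!/bin/sh
# Sketch: find what mpirun adds to the environment compared to a direct launch.
# File names below are examples, not produced by Triton itself.

# Dump a sorted environment in each launch mode, e.g.:
#   env | sort > direct_env.txt
#   mpirun --allow-run-as-root -n 1 env | sort > mpirun_env.txt
# (For affinity, compare `taskset -cp <pid>` of tritonserver in both modes.)

# Print lines present only in the second (mpirun) dump.
# comm requires sorted input, which the `sort` above guarantees.
diff_envs() {
  comm -13 "$1" "$2"
}

# Usage: diff_envs direct_env.txt mpirun_env.txt
```

Anything this prints (or a difference in the `taskset` affinity masks) would point at what `mpirun` configures differently for the server process.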
Thank you for your help in clarifying this matter.
Triton Information
To Reproduce
Expected behavior