TensorRT
TensorRT test time is not stable
Hello, when I use C++ to run inference on the engine file, the inference time is not stable (16-200 ms) when the NVIDIA driver is a newer version.
But the inference time is stable (16 ms) when the NVIDIA driver is an older version; for example, with the old driver, nvGameS.dll is version 27.
I have also tested different CUDA and TensorRT versions. I would like to ask what the reason for this could be. Thanks for your answer.
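In case part of the jitter comes from the measurement itself (cold clocks, lazy initialization on the first runs), it may help to time the engine with warm-up iterations and CUDA events. A minimal sketch, assuming an already-deserialized `nvinfer1::IExecutionContext* context` and a `bindings` array set up elsewhere (both are placeholders here):

```cpp
// Hedged sketch: measure TensorRT inference latency with warm-up runs
// and CUDA events. Assumes `context` and `bindings` are prepared elsewhere.
#include <NvInfer.h>
#include <cuda_runtime.h>
#include <cstdio>

void benchmark(nvinfer1::IExecutionContext* context, void** bindings)
{
    cudaStream_t stream;
    cudaStreamCreate(&stream);

    // Warm-up: the first iterations are usually slower (lazy init,
    // clock ramp-up), so exclude them from the measurement.
    for (int i = 0; i < 10; ++i)
        context->enqueueV2(bindings, stream, nullptr);
    cudaStreamSynchronize(stream);

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    const int iters = 100;
    cudaEventRecord(start, stream);
    for (int i = 0; i < iters; ++i)
        context->enqueueV2(bindings, stream, nullptr);
    cudaEventRecord(stop, stream);
    cudaEventSynchronize(stop);

    float totalMs = 0.f;
    cudaEventElapsedTime(&totalMs, start, stop);
    std::printf("average latency: %.3f ms\n", totalMs / iters);

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    cudaStreamDestroy(stream);
}
```

If the averaged latency is stable but individual calls still swing between 16 and 200 ms, the variance is more likely coming from the driver or from other GPU clients than from TensorRT itself.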
Could you set your GPU to Performance mode?
Where should this be modified?
I modified the settings in the NVIDIA Control Panel, but the test time is still unstable. What could be the reason for this?
Could be a driver bug...
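One thing worth checking is whether the GPU actually holds its clocks while the test is running; under WDDM the driver can down-clock the card between requests, which shows up as exactly this kind of jitter. A minimal NVML sketch that polls the SM clock (device index 0 and the polling interval are assumptions; link against nvml.lib):

```cpp
// Hedged sketch: poll the SM clock with NVML to see whether the GPU
// drops its clocks between inference calls. Assumes device index 0.
#include <nvml.h>
#include <chrono>
#include <cstdio>
#include <thread>

int main()
{
    if (nvmlInit() != NVML_SUCCESS)
        return 1;

    nvmlDevice_t device;
    nvmlDeviceGetHandleByIndex(0, &device);

    for (int i = 0; i < 20; ++i)
    {
        unsigned int smClockMHz = 0;
        nvmlDeviceGetClockInfo(device, NVML_CLOCK_SM, &smClockMHz);
        std::printf("SM clock: %u MHz\n", smClockMHz);
        // Sample while inference is running in another process/thread.
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
    }

    nvmlShutdown();
    return 0;
}
```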
Closing, since this does not seem to be a TRT bug. Also, on Windows I believe the OS and other applications compete for GPU resources. You might need to enable Windows TCC mode to make the performance stable.
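For reference, the current Windows driver model can be queried through NVML before deciding whether TCC is an option (TCC is generally not available on GeForce-class cards). A small sketch, assuming device index 0:

```cpp
// Hedged sketch: query the Windows driver model via NVML.
// nvmlDeviceGetDriverModel is Windows-only; device index 0 is assumed.
#include <nvml.h>
#include <cstdio>

int main()
{
    if (nvmlInit() != NVML_SUCCESS)
        return 1;

    nvmlDevice_t device;
    nvmlDeviceGetHandleByIndex(0, &device);

    nvmlDriverModel_t current, pending;
    if (nvmlDeviceGetDriverModel(device, &current, &pending) == NVML_SUCCESS)
    {
        std::printf("current driver model: %s\n",
                    current == NVML_DRIVER_WDDM ? "WDDM" : "TCC");
    }

    nvmlShutdown();
    return 0;
}
```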