Increase SERVER_TIMEOUT for L0_infer_valgrind
Related PRs:
- common: https://github.com/triton-inference-server/common/pull/67
- backend: https://github.com/triton-inference-server/backend/pull/67
- tensorrt_backend: https://github.com/triton-inference-server/tensorrt_backend/pull/44
It shouldn't take this long (2 days) to load models, right? When did the test start timing out?
@GuanLuo I don't expect the test to take 2 days either. I set it to 2 days as a safeguard, so the test doesn't hit the TIMEOUT and have to be restarted from scratch. In the last run, the test was killed because there is an 8h timeout set in our CI pipeline configuration. I will adjust the TIMEOUT once we have a more accurate test duration.
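To make the relationship concrete, here is a minimal sketch (not the actual QA harness; the helper name, URL, and polling logic are assumptions) of a readiness wait bounded by SERVER_TIMEOUT, which the 8h CI pipeline limit currently cuts short:

```python
# Illustrative sketch only, not the actual QA harness: poll server readiness
# for up to SERVER_TIMEOUT seconds. Helper name and polling logic are assumptions.
import time
import tritonclient.http as httpclient

SERVER_TIMEOUT = 2 * 24 * 60 * 60   # 2 days = 172800 s (the value set in this PR)
CI_PIPELINE_TIMEOUT = 8 * 60 * 60   # 8 h = 28800 s; the CI job is killed here first

def wait_for_server_ready(url="localhost:8000", timeout_s=SERVER_TIMEOUT):
    """Return True once the server reports ready, False if timeout_s elapses."""
    client = httpclient.InferenceServerClient(url=url)
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        try:
            if client.is_server_ready():
                return True
        except Exception:
            pass  # server not reachable yet; keep polling
        time.sleep(1)
    return False
```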
Removed the onnx and python backends from the valgrind tests. Added a possible memory leak introduced on the OpenVINO side to the list of known leaks. Made some changes in the OpenVINO backend.
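A minimal sketch of how such a known-leak filter over the valgrind log could look; the script, the pattern list, and the record parsing are hypothetical, not the actual QA check:

```python
# Hypothetical sketch of a valgrind known-leak filter; the pattern list and the
# crude record parsing are assumptions, not the real QA check script.
import sys

KNOWN_LEAK_PATTERNS = ["openvino"]  # e.g. the possible leak from the OpenVINO side

def unexpected_leaks(valgrind_log_path):
    """Return leak records (header + backtrace) that match no whitelist entry."""
    records, current = [], None
    with open(valgrind_log_path) as f:
        for line in f:
            # Leak record headers look like "... blocks are definitely lost in loss record ..."
            if " are definitely lost" in line or " are possibly lost" in line:
                if current:
                    records.append(current)
                current = [line]
            elif current and (" at 0x" in line or " by 0x" in line):
                current.append(line)  # backtrace frame of the current record
            elif current:
                records.append(current)
                current = None
    if current:
        records.append(current)
    return ["".join(r) for r in records
            if not any(p in "".join(r).lower() for p in KNOWN_LEAK_PATTERNS)]

if __name__ == "__main__":
    bad = unexpected_leaks(sys.argv[1])
    print(f"{len(bad)} unexpected leak record(s)")
    sys.exit(1 if bad else 0)
```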
Rebased
Although L0_infer_valgrind passes locally, the test_class_bbb test case with TF models fails on CI due to a timeout. Increased network_timeout for this case.
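For reference, a minimal sketch of passing a larger network_timeout to the Triton HTTP client; the URL and the 600-second value are assumptions, not the values used by the test:

```python
# Illustrative sketch: under valgrind the TF models respond far more slowly,
# so the HTTP client gets a larger network_timeout. Values are assumptions.
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(
    url="localhost:8000",
    connection_timeout=600.0,  # seconds allowed to establish the connection
    network_timeout=600.0,     # seconds allowed to wait for each response
)
```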
L0_infer_valgrind passes on CI: https://gitlab-master.nvidia.com/dl/dgx/tritonserver/-/jobs/42488617