Philipp Schmidt

Results 40 comments of Philipp Schmidt

We experience the same issue. This is currently holding us back from making the transition to compose v2 and the cli plugin.

No, disabling BuildKit gives the same error. Specifically it gives: `ERROR: CUDA initialization failure with error 35`. With "default-runtime" set in /etc/docker/daemon.json and using compose v1, the same machine can...
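For reference, the `default-runtime` setting mentioned above is the standard NVIDIA Container Toolkit configuration in /etc/docker/daemon.json (followed by a daemon restart):

```json
{
  "default-runtime": "nvidia",
  "runtimes": {
    "nvidia": {
      "path": "nvidia-container-runtime",
      "runtimeArgs": []
    }
  }
}
```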

P.S.: The initial author of the issue has "nvidia" as the default runtime as well. I don't understand how this doesn't apply to compose v2 if it applies to compose...

To clarify what we tried:

- compose v2.6 + runc default runtime + deploy>resources>devices>gpu in the YAML + `DOCKER_BUILDKIT=0 docker compose build` -> CUDA init error
- compose v1 + nvidia default runtime...
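For reference, the deploy>resources>devices GPU request in the compose v2 YAML looks roughly like this (service name and image are placeholders, not taken from our actual file):

```yaml
services:
  app:
    image: nvidia/cuda:11.8.0-base-ubuntu22.04  # placeholder image
    command: nvidia-smi
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
```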

+1 for this use-case

Bitstream support is necessary for a use case in the NVIDIA Video Processing Framework. Is this PR stalled? https://github.com/PyAV-Org/PyAV/issues/489 https://github.com/PyAV-Org/PyAV/issues/489#issuecomment-744506297 https://github.com/NVIDIA/VideoProcessingFramework/issues/99 https://github.com/NVIDIA/VideoProcessingFramework/issues/99#issuecomment-744408969

I have more info to share. The code snippet above is missing the build flags for TritonClient needed to e.g. enable the gRPC build. Setting them (via `set(...)`) will give a...

```cmake
FetchContent_Declare(
  tritonclient
  GIT_REPOSITORY https://github.com/triton-inference-server/client
  GIT_TAG r23.05
)
set(TRITON_ENABLE_CC_GRPC ON)
FetchContent_MakeAvailable(tritonclient)
# ...
target_link_libraries(test PRIVATE TritonClient::grpcclient rt m dl)
```

This will actually pull the repo and start building stuff,...
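For context, a minimal CMakeLists.txt wrapping that snippet might look like the following sketch (the project name and source file are illustrative, not from my actual setup):

```cmake
cmake_minimum_required(VERSION 3.14)
project(triton_client_demo CXX)  # hypothetical project name

# FetchContent downloads and builds the Triton client at configure/build time
include(FetchContent)

FetchContent_Declare(
  tritonclient
  GIT_REPOSITORY https://github.com/triton-inference-server/client
  GIT_TAG r23.05
)
# Enable the C++ gRPC client before making the content available,
# so the option is visible to the subproject's CMake
set(TRITON_ENABLE_CC_GRPC ON)
FetchContent_MakeAvailable(tritonclient)

add_executable(test main.cpp)  # "test" matches the target in the snippet above
target_link_libraries(test PRIVATE TritonClient::grpcclient rt m dl)
```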

We have the same issue, as already discussed in #5687. I'm moving the discussion here, because the underlying issue is the same and I would like to focus the attention....

Hello @Tabrizian, is there any news on this feature? We have implemented the suggested workaround (one model per GPU, naming each model with a GPU suffix) but still get degraded...