Docker build fails because of maybe-uninitialized warning
Description
I am trying to build a Triton Docker image following the guide at https://github.com/triton-inference-server/server/blob/r23.07/docs/customization_guide/build.md#building-with-docker, using the build command:
python ./build.py --target-platform linux --target-machine x86_64 --build-type=MinSizeRel --version 2.36.0 --enable-gpu --endpoint=grpc --endpoint=http --backend=pytorch
The build fails with a maybe-uninitialized warning, promoted to an error by -Werror, while compiling backend_input_collector.cc.
I think this could be related to the issue: https://github.com/triton-inference-server/server/issues/5643
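For context, here is a minimal sketch of the pattern GCC is flagging: a variable assigned on only one path and then captured by value into a lambda. The names are made up for illustration and this is not Triton's actual code, but compiled with g++ -O2 -Werror=maybe-uninitialized it fails the same way:

// repro.cc -- illustrative sketch only, NOT Triton's actual code.
// Build: g++ -O2 -Werror=maybe-uninitialized repro.cc
#include <cstdio>

enum MemoryType { kCpu = 0, kCpuPinned = 1 };

// Mirrors the shape of FlushPendingPinned: 'type' is assigned on only
// one path, then captured by value into a lambda (Triton hands the
// lambda to AsyncWorkQueue::AddTask; here it is simply invoked).
void Flush(bool have_pinned_buffer) {
  MemoryType type;        // no initializer
  if (have_pinned_buffer) {
    type = kCpuPinned;    // the only assignment
  }
  auto task = [type] {    // capture of a possibly-uninitialized value
    std::printf("type=%d\n", static_cast<int>(type));
  };
  task();                 // on the 'false' path this reads garbage
}

int main() {
  Flush(false);
  return 0;
}

In Triton the value is assigned through an out-parameter of another call, which GCC cannot always prove happens on every path, so the diagnostic fires even if the code is actually safe.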
Error
[ 47%] Linking CXX executable multi_server
/usr/bin/cmake -E cmake_link_script CMakeFiles/multi_server.dir/link.txt --verbose=0
make[5]: Leaving directory '/tmp/tritonbuild/tritonserver/build/triton-server'
[ 47%] Built target memory_alloc
make[5]: Leaving directory '/tmp/tritonbuild/tritonserver/build/triton-server'
[ 47%] Built target multi_server
/tmp/tritonbuild/tritonserver/build/triton-server/_deps/repo-backend-src/src/backend_input_collector.cc: In member function 'bool triton::backend::BackendInputCollector::FlushPendingPinned(char*, size_t, TRITONSERVER_MemoryType, int64_t)':
/tmp/tritonbuild/tritonserver/build/triton-server/_deps/repo-backend-src/src/backend_input_collector.cc:680:77: error: 'pinned_memory_type' may be used uninitialized in this function [-Werror=maybe-uninitialized]
680 | CommonErrorToTritonError(triton::common::AsyncWorkQueue::AddTask(
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
681 | [this, offset, pinned_memory, pinned_memory_type,
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
682 | pending_pinned_byte_size, pinned_memory_type_id, pending_it,
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
683 | end_it, incomplete_count, &deferred_pinned]() mutable {
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
684 | for (; pending_it != end_it; pending_it++) {
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
685 | SetInputTensor(
| ~~~~~~~~~~~~~~~
686 | "pinned async H2H", *pending_it, pinned_memory,
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
687 | pending_pinned_byte_size, pinned_memory_type,
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
688 | pinned_memory_type_id, offset,
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
689 | TRITONSERVER_MEMORY_CPU_PINNED, false, false);
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
690 | offset += pending_it->memory_desc_.byte_size_;
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
691 | }
| ~
692 | // The last segmented task will start the next phase of
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
693 | // the internal pinned buffer copy
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
694 | if (incomplete_count->fetch_sub(1) == 1) {
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
695 | #ifdef TRITON_ENABLE_GPU
| ~~~~~~~~~~~~~~~~~~~~~~~~
696 | if (buffer_ready_event_ != nullptr) {
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
697 | cudaEventSynchronize(buffer_ready_event_);
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
698 | buffer_ready_event_ = nullptr;
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
699 | }
| ~
700 | #endif // TRITON_ENABLE_GPU
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
701 | completion_queue_.Put(deferred_pinned.Finalize(stream_));
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
702 | delete incomplete_count;
| ~~~~~~~~~~~~~~~~~~~~~~~~
703 | }
| ~
704 | }));
| ~~
cc1plus: all warnings being treated as errors
make[5]: *** [_deps/repo-backend-build/CMakeFiles/triton-backend-utils.dir/build.make:90: _deps/repo-backend-build/CMakeFiles/triton-backend-utils.dir/src/backend_input_collector.cc.o] Error 1
make[5]: *** Waiting for unfinished jobs....
make[5]: Leaving directory '/tmp/tritonbuild/tritonserver/build/triton-server'
make[4]: *** [CMakeFiles/Makefile2:1012: _deps/repo-backend-build/CMakeFiles/triton-backend-utils.dir/all] Error 2
make[4]: *** Waiting for unfinished jobs....
Triton Information
Triton version 2.36.0 (branch r23.07). I am not using a prebuilt container; I am building it myself with build.py as described above.
To Reproduce
Steps to reproduce the behavior:
- Clone: https://github.com/triton-inference-server/server.git
- Check out branch r23.07
- Run:
python ./build.py --target-platform linux --target-machine x86_64 --build-type=MinSizeRel --version 2.36.0 --enable-gpu --endpoint=grpc --endpoint=http --backend=pytorch
Expected behavior
The build completes without errors and a Docker image is created.
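For what it's worth, the usual remedy for this class of diagnostic is to give the variable a defined value on every path before the lambda captures it. A sketch of the fixed pattern, using the same made-up names as the repro above (again, not Triton's actual code):

// fixed.cc -- same sketch with the usual remedy applied: the variable
// gets a defined value on every path before the lambda captures it.
// Build: g++ -O2 -Werror=maybe-uninitialized fixed.cc   (compiles cleanly)
#include <cstdio>

enum MemoryType { kCpu = 0, kCpuPinned = 1 };

void Flush(bool have_pinned_buffer) {
  MemoryType type = kCpu;   // initialized at declaration
  if (have_pinned_buffer) {
    type = kCpuPinned;
  }
  auto task = [type] { std::printf("type=%d\n", static_cast<int>(type)); };
  task();
}

int main() {
  Flush(false);
  return 0;
}

Alternatively, the diagnostic could be demoted with -Wno-error=maybe-uninitialized in CMAKE_CXX_FLAGS; I have not verified whether build.py in r23.07 exposes a way to forward extra CMake arguments (python ./build.py --help should show it).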