Description
Thanks for your great work!
When I build the Triton server with Docker using the ONNX Runtime backend, I see a very large number of compiler warnings, and the build then breaks right after [ 77%] Built target onnxruntime_providers.
Do you also see these warnings during the build process? Why do these errors happen?
Triton Information
What version of Triton are you using?
main
Are you using the Triton container or did you build it yourself?
I built it myself.
To Reproduce
Steps to reproduce the behavior:
I have built the buildbase container.
a. While building the builder container and installing the onnxruntime backend, I hit unexpected errors at Step 37/49 : RUN ./build.sh.
b. The build emits a large number of warnings such as the following:
warning: ignoring return value of ‘write’, declared with attribute warn_unused_result [-Wunused-result]
warning: enumeration value ‘AttributeProto_AttributeType_TYPE_PROTO’ not handled in switch [-Wswitch]
/workspace/onnxruntime/cmake/external/onnx/onnx/defs/parser.cc:197:10: note: ‘dblval’ was declared here
/workspace/onnxruntime/cmake/external/protobuf/src/google/protobuf/repeated_field.h:1374:5: warning: ‘floatval’ may be used uninitialized in this function [-Wmaybe-uninitialized]
/workspace/onnxruntime/onnxruntime/core/providers/cuda/tensor/pad.cc:136:35: warning: comparison of integer expressions of different signedness: ‘int32_t’ {aka ‘int’} and ‘std::vector<long int>::size_type’ {aka ‘long unsigned int’} [-Wsign-compare]
c. The physical machine used for the build does not have a GPU (such as a V100) installed. We assume this does not matter for compilation; is that right? See the sanity-check sketch below.
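As a sanity check for point c (our assumption: nvcc only needs the CUDA toolkit to compile device code, while nvidia-smi needs a driver and an attached GPU), we would expect the following behavior inside the build container:

```
# Only the CUDA toolkit is needed to *compile* CUDA code:
nvcc --version    # should succeed on a GPU-less build machine
# A driver and physical GPU are only needed at *run* time:
nvidia-smi        # expected to fail here, which should be fine for building
```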
Partial logs follow:
Step 1/49 : ARG BASE_IMAGE=nvcr.io/nvidia/tritonserver:21.12-py3-min
Step 2/49 : ARG ONNXRUNTIME_VERSION=1.10.0
Step 3/49 : ARG ONNXRUNTIME_REPO=https://github.com/microsoft/onnxruntime
Step 4/49 : ARG ONNXRUNTIME_BUILD_CONFIG=Release
Step 5/49 : ARG ONNXRUNTIME_OPENVINO_VERSION=2021.2.200
Step 6/49 : FROM ${BASE_IMAGE}
Step 8/49 : ENV DEBIAN_FRONTEND=noninteractive
Step 9/49 : RUN sed -i 's/archive.ubuntu.com/mirrors.ustc.edu.cn/g' /etc/apt/sources.list
Step 10/49 : RUN sed -i 's/security.ubuntu.com/mirrors.ustc.edu.cn/g' /etc/apt/sources.list
Step 35/49 : RUN sed -i 's/set_target_properties(onnxruntime PROPERTIES VERSION ${ORT_VERSION})//'
Step 36/49 : ENV CUDACXX="/usr/local/cuda/bin/nvcc"
Step 37/49 : RUN ./build.sh ${COMMON_BUILD_ARGS} --update --build --use_cuda --cuda_home "/usr/local/cuda" --cudnn_home "/usr/local/cudnn-8.3/cuda" --use_tensorrt --tensorrt_home "/usr/src/tensorrt" --use_openvino CPU_FP32
...
/workspace/onnxruntime/cmake/external/pytorch_cpuinfo/deps/clog/src/clog.c: In function ‘clog_vlog_fatal’:
/workspace/onnxruntime/cmake/external/pytorch_cpuinfo/deps/clog/src/clog.c:112:4: warning: ignoring return value of ‘write’, declared with attribute warn_unused_result [-Wunused-result]
112 | write(STDERR_FILENO, out_buffer, prefix_chars + format_chars + CLOG_SUFFIX_LENGTH);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
...
[ 77%] Built target onnxruntime_providers
make: *** [Makefile:166: all] Error 2
Traceback (most recent call last):
File "/workspace/onnxruntime/tools/ci_build/build.py", line 2362, in
sys.exit(main())
File "/workspace/onnxruntime/tools/ci_build/build.py", line 2282, in main
build_targets(args, cmake_path, build_dir, configs, num_parallel_jobs, args.target)
File "/workspace/onnxruntime/tools/ci_build/build.py", line 1174, in build_targets
run_subprocess(cmd_args, env=env)
File "/workspace/onnxruntime/tools/ci_build/build.py", line 639, in run_subprocess
return run(*args, cwd=cwd, capture_stdout=capture_stdout, shell=shell, env=my_env)
File "/workspace/onnxruntime/tools/python/util/run.py", line 42, in run
completed_process = subprocess.run(
File "/usr/lib/python3.8/subprocess.py", line 516, in run
raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['/usr/bin/cmake', '--build', '/workspace/build/Release', '--config', 'Release', '--', '-j8']' returned non-zero exit status 2.
The command '/bin/sh -c ./build.sh ${COMMON_BUILD_ARGS} --update --build --use_cuda --cuda_home "/usr/local/cuda" --cudnn_home "/usr/local/cudnn-8.3/cuda" --use_tensorrt --tensorrt_home "/usr/src/tensorrt" --use_openvino CPU_FP32' returned a non-zero code: 1
make[2]: *** [CMakeFiles/ort_target.dir/build.make:74: onnxruntime/lib/libonnxruntime.so] Error 1
make[2]: Leaving directory '/tmp/tritonbuild/onnxruntime/build'
make[1]: *** [CMakeFiles/Makefile2:143: CMakeFiles/ort_target.dir/all] Error 2
make[1]: Leaving directory '/tmp/tritonbuild/onnxruntime/build'
make: *** [Makefile:136: all] Error 2
error: make install failed
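None of the warnings quoted above are fatal by themselves; since the build runs with -j8, the actual compiler error that stopped onnxruntime_providers is likely interleaved somewhere earlier in the output. A sketch of how we could isolate it (onnx_build.log is a hypothetical captured copy of the build output, not a file the build produces):

```
# Separate the one real failure from the warning noise in a captured log:
grep -n " error: " onnx_build.log | head    # the true compile error(s)
grep -c " warning: " onnx_build.log         # warnings are numerous but benign
# Or re-run the exact command from the traceback serially, so the first
# real error appears at the end of the output instead of mid-stream:
/usr/bin/cmake --build /workspace/build/Release --config Release -- -j1
```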
platform linux
machine x86_64
version 2.19.0dev
default repo-tag: main
backend "ensemble" at tag/branch "main"
backend "identity" at tag/branch "main"
backend "repeat" at tag/branch "main"
backend "square" at tag/branch "main"
backend "onnxruntime" at tag/branch "main"
backend "pytorch" at tag/branch "main"
repoagent "checksum" at tag/branch "main"
Building Triton Inference Server
component "common" at tag/branch "main"
component "core" at tag/branch "main"
component "backend" at tag/branch "main"
component "thirdparty" at tag/branch "main"
mkdir: /tmp/tritonbuild/tritonserver/build
cmake ['-DCMAKE_BUILD_TYPE=Release', '-DCMAKE_INSTALL_PREFIX:PATH=/tmp/tritonbuild/tritonserver/install', '-DTRITON_COMMON_REPO_TAG:STRING=main', '-DTRITON_CORE_REPO_TAG:STRING=main', '-DTRITON_BACKEND_REPO_TAG:STRING=main', '-DTRITON_THIRD_PARTY_REPO_TAG:STRING=main', '-DTRITON_ENABLE_LOGGING:BOOL=ON', '-DTRITON_ENABLE_STATS:BOOL=ON', '-DTRITON_ENABLE_METRICS:BOOL=ON', '-DTRITON_ENABLE_METRICS_GPU:BOOL=ON', '-DTRITON_ENABLE_TRACING:BOOL=ON', '-DTRITON_ENABLE_NVTX:BOOL=OFF', '-DTRITON_ENABLE_GPU:BOOL=ON', '-DTRITON_MIN_COMPUTE_CAPABILITY=6.0', '-DTRITON_ENABLE_MALI_GPU:BOOL=OFF', '-DTRITON_ENABLE_GRPC:BOOL=ON', '-DTRITON_ENABLE_HTTP:BOOL=ON', '-DTRITON_ENABLE_SAGEMAKER:BOOL=OFF', '-DTRITON_ENABLE_VERTEX_AI:BOOL=OFF', '-DTRITON_ENABLE_GCS:BOOL=ON', '-DTRITON_ENABLE_S3:BOOL=ON', '-DTRITON_ENABLE_AZURE_STORAGE:BOOL=ON', '-DTRITON_ENABLE_TENSORFLOW:BOOL=OFF', '-DTRITON_ENABLE_ENSEMBLE:BOOL=ON', '-DTRITON_ENABLE_ONNXRUNTIME:BOOL=ON', '-DTRITON_ENABLE_PYTHON:BOOL=OFF', '-DTRITON_ENABLE_DALI:BOOL=OFF', '-DTRITON_ENABLE_PYTORCH:BOOL=ON', '-DTRITON_ENABLE_OPENVINO:BOOL=OFF', '-DTRITON_ENABLE_FIL:BOOL=OFF', '-DTRITON_ENABLE_FASTERTRANSFORMER:BOOL=OFF', '-DTRITON_ENABLE_TENSORRT:BOOL=OFF', '-DTRITON_ENABLE_NVTX:BOOL=OFF', '-DTRITON_ENABLE_ARMNN_TFLITE:BOOL=OFF', '-DTRT_VERSION=8.2.1.8+cuda11.4.2.006', '-DDALI_VERSION=1.8.0', '/workspace/build']
make server
mkdir: /tmp/tritonbuild/install
cpdir: /tmp/tritonbuild/tritonserver/install -> /tmp/tritonbuild/install
mkdir: /tmp/tritonbuild
git clone of repo "identity_backend" at tag "main"
mkdir: /tmp/tritonbuild/identity/build
we achieve here. (note: this and the identical lines below are debug prints we added to build.py)
cmake ['-DCMAKE_BUILD_TYPE=Release', '-DCMAKE_INSTALL_PREFIX:PATH=/tmp/tritonbuild/identity/install', '-DTRITON_COMMON_REPO_TAG:STRING=main', '-DTRITON_CORE_REPO_TAG:STRING=main', '-DTRITON_BACKEND_REPO_TAG:STRING=main', '-DTRITON_ENABLE_GPU:BOOL=ON', '-DTRITON_ENABLE_MALI_GPU:BOOL=OFF', '-DTRITON_ENABLE_STATS:BOOL=ON', '-DTRT_VERSION=8.2.1.8+cuda11.4.2.006', '-DDALI_VERSION=1.8.0', '..']
make install
rmdir: /tmp/tritonbuild/install/backends/identity
mkdir: /tmp/tritonbuild/install/backends/identity
cpdir: /tmp/tritonbuild/identity/install/backends/identity -> /tmp/tritonbuild/install/backends/identity
mkdir: /tmp/tritonbuild
git clone of repo "repeat_backend" at tag "main"
mkdir: /tmp/tritonbuild/repeat/build
we achieve here.
cmake ['-DCMAKE_BUILD_TYPE=Release', '-DCMAKE_INSTALL_PREFIX:PATH=/tmp/tritonbuild/repeat/install', '-DTRITON_COMMON_REPO_TAG:STRING=main', '-DTRITON_CORE_REPO_TAG:STRING=main', '-DTRITON_BACKEND_REPO_TAG:STRING=main', '-DTRITON_ENABLE_GPU:BOOL=ON', '-DTRITON_ENABLE_MALI_GPU:BOOL=OFF', '-DTRITON_ENABLE_STATS:BOOL=ON', '-DTRT_VERSION=8.2.1.8+cuda11.4.2.006', '-DDALI_VERSION=1.8.0', '..']
make install
rmdir: /tmp/tritonbuild/install/backends/repeat
mkdir: /tmp/tritonbuild/install/backends/repeat
cpdir: /tmp/tritonbuild/repeat/install/backends/repeat -> /tmp/tritonbuild/install/backends/repeat
mkdir: /tmp/tritonbuild
git clone of repo "square_backend" at tag "main"
mkdir: /tmp/tritonbuild/square/build
cmake ['-DCMAKE_BUILD_TYPE=Release', '-DCMAKE_INSTALL_PREFIX:PATH=/tmp/tritonbuild/square/install', '-DTRITON_COMMON_REPO_TAG:STRING=main', '-DTRITON_CORE_REPO_TAG:STRING=main', '-DTRITON_BACKEND_REPO_TAG:STRING=main', '-DTRITON_ENABLE_GPU:BOOL=ON', '-DTRITON_ENABLE_MALI_GPU:BOOL=OFF', '-DTRITON_ENABLE_STATS:BOOL=ON', '-DTRT_VERSION=8.2.1.8+cuda11.4.2.006', '-DDALI_VERSION=1.8.0', '..']
make install
rmdir: /tmp/tritonbuild/install/backends/square
mkdir: /tmp/tritonbuild/install/backends/square
cpdir: /tmp/tritonbuild/square/install/backends/square -> /tmp/tritonbuild/install/backends/square
mkdir: /tmp/tritonbuild
git clone of repo "onnxruntime_backend" at tag "main"
mkdir: /tmp/tritonbuild/onnxruntime/build
we achieve here.
cmake ['-DTRITON_BUILD_ONNXRUNTIME_VERSION=1.10.0', '-DTRITON_ENABLE_ONNXRUNTIME_TENSORRT:BOOL=ON', '-DTRITON_BUILD_CONTAINER_VERSION=21.12', '-DTRITON_ENABLE_ONNXRUNTIME_OPENVINO:BOOL=ON', '-DTRITON_BUILD_ONNXRUNTIME_OPENVINO_VERSION=2021.2.200', '-DCMAKE_BUILD_TYPE=Release', '-DCMAKE_INSTALL_PREFIX:PATH=/tmp/tritonbuild/onnxruntime/install', '-DTRITON_COMMON_REPO_TAG:STRING=main', '-DTRITON_CORE_REPO_TAG:STRING=main', '-DTRITON_BACKEND_REPO_TAG:STRING=main', '-DTRITON_ENABLE_GPU:BOOL=ON', '-DTRITON_ENABLE_MALI_GPU:BOOL=OFF', '-DTRITON_ENABLE_STATS:BOOL=ON', '-DTRT_VERSION=8.2.1.8+cuda11.4.2.006', '-DDALI_VERSION=1.8.0', '..']
make install
error: docker run tritonserver_builder failed
platform linux
machine x86_64
version 2.19.0dev
default repo-tag: main
container version 22.02dev
upstream container version 21.12
backend "ensemble" at tag/branch "main"
backend "identity" at tag/branch "main"
backend "repeat" at tag/branch "main"
backend "square" at tag/branch "main"
backend "onnxruntime" at tag/branch "main"
backend "pytorch" at tag/branch "main"
repoagent "checksum" at tag/branch "main"
buildbase container ['docker', 'build', '--network', 'bridge', '-f', '/tmp/citritonbuild/Dockerfile.buildbase', '--pull', '--cache-from=tritonserver_buildbase', '--cache-from=tritonserver_buildbase_cache0', '--cache-from=tritonserver_buildbase_cache1']
mkdir: /tmp/citritonbuild
buildbase env ['docker', 'run', '--rm', 'tritonserver_buildbase', 'env']
['docker', 'run', '--name', 'tritonserver_builder', '-w', '/workspace', '-v', '/var/run/docker.sock:/var/run/docker.sock', '--env', 'TRITONBUILD_TRT_VERSION=8.2.1.8+cuda11.4.2.006', '--env', 'TRITONBUILD_DALI_VERSION=1.8.0', 'tritonserver_buildbase', 'python3', './build.py', '--build-dir=/tmp/citritonbuild', '--enable-logging', '--enable-stats', '--enable-tracing', '--enable-metrics', '--enable-gpu-metrics', '--enable-gpu', '--filesystem=gcs', '--filesystem=azure_storage', '--filesystem=s3', '--endpoint=http', '--endpoint=grpc', '--repo-tag=common:main', '--repo-tag=core:main', '--repo-tag=backend:main', '--repo-tag=thirdparty:main', '--backend=ensemble', '--backend=identity:main', '--backend=repeat:main', '--backend=square:main', '--backend=onnxruntime:main', '--backend=pytorch:main', '--repoagent=checksum:main', '-v', '--no-container-build', '--version', '2.19.0dev', '--container-version', '22.02dev', '--upstream-container-version', '21.12', '--cmake-dir', '/workspace/build', '--build-dir', '/tmp/tritonbuild', '--install-dir', '/tmp/tritonbuild/install']
For completeness, the original build command we ran on the host was:
./build.py --build-dir=/tmp/citritonbuild --enable-logging --enable-stats --enable-tracing --enable-metrics --enable-gpu-metrics --enable-gpu --filesystem=gcs --filesystem=azure_storage --filesystem=s3 --endpoint=http --endpoint=grpc --repo-tag=common:main --repo-tag=core:main --repo-tag=backend:main --repo-tag=thirdparty:main --backend=ensemble --backend=identity:main --backend=repeat:main --backend=square:main --backend=onnxruntime:main --backend=pytorch:main --repoagent=checksum:main -v
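To iterate on just the failing backend, a narrowed invocation might help (a sketch; it reuses only flags already shown in the command above):

```
# Hypothetical narrowed rebuild: drop the backends that already build cleanly
# and keep only onnxruntime, so each debugging iteration is faster.
./build.py --build-dir=/tmp/citritonbuild --enable-logging --enable-stats \
    --enable-gpu --endpoint=http --endpoint=grpc \
    --repo-tag=common:main --repo-tag=core:main --repo-tag=backend:main \
    --repo-tag=thirdparty:main --backend=onnxruntime:main -v
```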
Expected behavior
I expect the Triton server build to complete successfully.