
onnxruntime: `--no-container-build` not honored

fgervais opened this issue Aug 05 '22 · 4 comments

Description

When building with the `--no-container-build` flag and `--backend=onnxruntime`, the build will still try to build onnxruntime with Docker.

[ 46%] Building ONNX Runtime
../tools/gen_ort_dockerfile.py --ort-build-config="Release" --triton-container="nvcr.io/nvidia/tritonserver:22.07-py3-min" --ort-version="1.12.0" --trt-version="" --onnx-tensorrt-tag="" --output=Dockerfile.ort
docker build --cache-from=tritonserver_onnxruntime --cache-from=tritonserver_onnxruntime_cache0 --cache-from=tritonserver_onnxruntime_cache1 -t tritonserver_onnxruntime -f ./Dockerfile.ort /build/citritonbuild/onnxruntime
make[2]: docker: Command not found
make[2]: Leaving directory '/build/citritonbuild/onnxruntime/build'
make[2]: *** [CMakeFiles/ort_target.dir/build.make:74: onnxruntime/lib/libonnxruntime.so] Error 127
make[1]: *** [CMakeFiles/Makefile2:145: CMakeFiles/ort_target.dir/all] Error 2
make[1]: Leaving directory '/build/citritonbuild/onnxruntime/build'
make: *** [Makefile:136: all] Error 2
Building Triton Inference Server
platform linux
machine x86_64
version 2.24.0
build dir /build/citritonbuild
install dir /tritonserver
cmake dir /build
default repo-tag: r22.07
container version 22.07
upstream container version 22.07
endpoint "http"
endpoint "grpc"
backend "onnxruntime" at tag/branch "r22.07"
backend "onnxruntime" CMake override "-DTRITON_ENABLE_ONNXRUNTIME_OPENVINO=OFF"
backend "onnxruntime" CMake override "-DTRITON_ONNXRUNTIME_DOCKER_BUILD=OFF"
component "common" at tag/branch "r22.07"
component "core" at tag/branch "r22.07"
component "backend" at tag/branch "r22.07"
component "thirdparty" at tag/branch "r22.07"
error: build failed
The command '/bin/sh -c ./build.py 	-v 	-j1 	--no-container-build 	--enable-logging 	--cmake-dir=$(pwd) 	--build-dir=$(pwd)/citritonbuild 	--install-dir=/tritonserver 	--endpoint=http 	--endpoint=grpc 	--backend=onnxruntime 	--override-backend-cmake-arg=onnxruntime:TRITON_ENABLE_ONNXRUNTIME_OPENVINO=OFF 	--override-backend-cmake-arg=onnxruntime:TRITON_ONNXRUNTIME_DOCKER_BUILD=OFF' returned a non-zero code: 1

Triton Information

2.24.0

Are you using the Triton container or did you build it yourself?

I am building it myself.

To Reproduce

./build.py \
	-v \
	-j1 \
	--no-container-build \
	--enable-logging \
	--cmake-dir=$(pwd) \
	--build-dir=$(pwd)/citritonbuild \
	--install-dir=/tritonserver \
	--endpoint=http \
	--endpoint=grpc \
	--backend=onnxruntime \
	--override-backend-cmake-arg=onnxruntime:TRITON_ENABLE_ONNXRUNTIME_OPENVINO=OFF \
	--override-backend-cmake-arg=onnxruntime:TRITON_ONNXRUNTIME_DOCKER_BUILD=OFF

Expected behavior

I expect the backend to build without trying to run Docker, as requested by the build flag.

fgervais · Aug 05 '22

It can get a little confusing. `--no-container-build` means that Triton itself is not built inside a Docker container. The individual backends (like the onnxruntime backend in this case), however, do not honor that setting. To actually avoid using a Docker container within the ORT backend build, we would need a way in build.py to let users point to precompiled ORT libraries and include paths (a rough sketch of what that could look like follows this comment).

Is using a Docker container within the backend build a blocking issue for you?

@GuanLuo for visibility.

tanmayv25 · Aug 05 '22
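
For illustration, a minimal sketch of a standalone, Docker-free build of the onnxruntime_backend against a precompiled ONNX Runtime. The TRITON_ONNXRUNTIME_INCLUDE_PATHS and TRITON_ONNXRUNTIME_LIB_PATHS variables and the /opt/onnxruntime paths are assumptions for this sketch, not options confirmed in this thread; only the OPENVINO and DOCKER_BUILD overrides appear in the build output above.

# Assumption: ONNX Runtime 1.12.0 has already been built or unpacked under /opt/onnxruntime
git clone -b r22.07 https://github.com/triton-inference-server/onnxruntime_backend.git
mkdir -p onnxruntime_backend/build && cd onnxruntime_backend/build
cmake .. \
	-DCMAKE_INSTALL_PREFIX=$(pwd)/install \
	-DTRITON_BACKEND_REPO_TAG=r22.07 \
	-DTRITON_CORE_REPO_TAG=r22.07 \
	-DTRITON_COMMON_REPO_TAG=r22.07 \
	-DTRITON_ENABLE_ONNXRUNTIME_OPENVINO=OFF \
	-DTRITON_ONNXRUNTIME_DOCKER_BUILD=OFF \
	-DTRITON_ONNXRUNTIME_INCLUDE_PATHS=/opt/onnxruntime/include \
	-DTRITON_ONNXRUNTIME_LIB_PATHS=/opt/onnxruntime/lib
make -j$(nproc) install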

I see,

Yes, on our side it is blocking: it is required by how our build pipelines are set up, which build Triton for all our embedded platforms.

fgervais · Aug 05 '22

So the first step on my side would be to build this project myself as a standalone thing?

fgervais · Aug 05 '22

Yes. You can take a look at this issue, which covers the same situation: https://github.com/triton-inference-server/onnxruntime_backend/issues/65

You can build the rest of Triton using build.py, then copy in the ORT backend built without Docker (a rough sketch of that copy step is below).

tanmayv25 · Aug 05 '22
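
As a rough sketch of that copy step, assuming the standalone build from the earlier sketch and the conventional backends/<name> install layout; the exact paths are assumptions, not commands taken from this thread.

# Build Triton itself without the onnxruntime backend (same flags as in "To Reproduce",
# minus the --backend and backend CMake overrides), then drop in the separately built
# backend. /tritonserver matches --install-dir above.
mkdir -p /tritonserver/backends/onnxruntime
cp -r onnxruntime_backend/build/install/backends/onnxruntime/* /tritonserver/backends/onnxruntime/
# The ONNX Runtime shared libraries may also need to sit next to the backend's
# libtriton_onnxruntime.so so they can be found at load time.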

Closing this issue due to lack of activity. Please re-open it if you would like to follow up.

krishung5 · Sep 09 '22