
C++ API

VisionZQ opened this issue 3 years ago • 5 comments

Hi, I want to use DALI and TensorRT to accelerate inference with C++! I successfully compiled the latest DALI v1.13 on Ubuntu 18.04 on x86_64. Then I tried to modify and compile the "MultiDeviceInferencePipeline" project, but there were errors:

/usr/local/include/dali/core/float16.h:25:10: fatal error: dali/util/half.hpp: No such file or directory
 #include "dali/util/half.hpp"
/usr/local/include/dali/core/backend_tags.h:19:10: fatal error: cuda/memory_resource: No such file or directory
 #include <cuda/memory_resource>

Any suggestion is appreciated! Thanks.

VisionZQ avatar Apr 21 '22 02:04 VisionZQ

Hi @VisionZQ, if you want to use DALI for inference you may check the dedicated DALI Triton backend. What you really need to do is mostly create a DALI pipeline and run it, and for that the C API is sufficient. Please check the mentioned project and the C API definition for reference - https://github.com/NVIDIA/DALI/blob/main/include/dali/c_api.h. In the meantime I will check the missing header problem.

JanuszL avatar Apr 21 '22 07:04 JanuszL

BTW - I see that the mentioned header is shipped with DALI; in my case it is at /usr/local/lib/python3.8/dist-packages/nvidia/dali/include/dali/util/half.hpp. Can you check if it is there in your case? Maybe you haven't added the right include directory to your compilation command?

JanuszL avatar Apr 21 '22 07:04 JanuszL
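
A quick way to verify the include setup is a translation unit that only includes the DALI C API header, compiled with the include directory mentioned above. Below is a minimal sketch; the file name and the paths in the comment are assumptions based on this thread and should be adjusted to the actual Python and CUDA installation:

// check_dali_includes.cpp - does nothing at runtime; it only verifies that the
// DALI headers can be found. Hypothetical compile command (adjust the paths):
//   g++ -std=c++14 -c check_dali_includes.cpp \
//       -I/usr/local/lib/python3.8/dist-packages/nvidia/dali/include \
//       -I/usr/local/cuda/include
#include "dali/c_api.h"

int main() {
  return 0;
}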

@VisionZQ The path /usr/local/include/dali/core/float16.h suggests that you've tried to install DALI with make install or similar - unfortunately, that is not fully supported as of now. You can, however, install the Python library, which should put the C++ headers in the Python installation directory, as mentioned by @JanuszL.

Assuming that you have set up your local build with CMake, it would look like:

make -j  # build DALI
pip install dali/python  # install the Python package built above (it also ships the C++ headers)

This should install the Python library along with the headers.

mzient avatar Apr 21 '22 08:04 mzient


@JanuszL Thanks, I will give it a try! To be exact, I want to use DALI to do the preprocessing on the GPU and then feed the result directly to TensorRT, so as to speed up the whole inference.

VisionZQ avatar Apr 21 '22 10:04 VisionZQ

Hi @VisionZQ,

To be exact, I want to use DALI to do the preprocessing on the GPU and then feed the result directly to TensorRT, so as to speed up the whole inference.

In this case, the C API is sufficient. You can create a DALI pipeline in Python, serialize it to a file, and then use the C API to load the pipeline and run it.

JanuszL avatar Apr 21 '22 11:04 JanuszL
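
For reference, here is a minimal sketch of that flow on the C++ side. The pipeline itself would be defined and serialized in Python beforehand (e.g. with @pipeline_def and pipe.serialize(filename=...)). The function names come from include/dali/c_api.h; the batch size, thread count and queue depths below are placeholder assumptions, and the exact argument lists should be double-checked against the header shipped with the installed DALI version:

// run_dali_pipeline.cpp - loads a serialized DALI pipeline and runs one
// iteration through the C API.
#include <fstream>
#include <iostream>
#include <sstream>
#include <string>

#include "dali/c_api.h"

int main(int argc, char *argv[]) {
  if (argc < 2) {
    std::cerr << "usage: " << argv[0] << " <serialized_pipeline_file>\n";
    return 1;
  }

  // Read the pipeline that was serialized from Python.
  std::ifstream file(argv[1], std::ios::binary);
  std::stringstream buffer;
  buffer << file.rdbuf();
  std::string serialized = buffer.str();

  daliPipelineHandle handle;
  // Placeholder settings: batch size 1, 2 CPU threads, GPU 0, non-separated
  // executor, prefetch queue depth 2, memory stats disabled.
  daliCreatePipeline(&handle, serialized.c_str(), static_cast<int>(serialized.size()),
                     /*max_batch_size=*/1, /*num_threads=*/2, /*device_id=*/0,
                     /*separated_execution=*/0, /*prefetch_queue_depth=*/2,
                     /*cpu_prefetch_queue_depth=*/2, /*gpu_prefetch_queue_depth=*/2,
                     /*enable_memory_stats=*/0);

  daliPrefetchUniform(&handle, /*queue_depth=*/2);  // fill the prefetch queue
  daliOutput(&handle);                              // wait for the first batch

  int num_outputs = daliGetNumOutput(&handle);
  std::cout << "pipeline produced " << num_outputs << " output(s)\n";
  // At this point the GPU outputs can be copied or shared into buffers bound to
  // a TensorRT execution context (see e.g. daliOutputCopy in c_api.h). In a
  // steady-state loop you would call daliRun() to schedule the next batch
  // before releasing the current one.

  daliOutputRelease(&handle);
  daliDeletePipeline(&handle);
  return 0;
}

Building this needs the same include directory as in the earlier example, plus linking against the DALI shared libraries (in a typical pip installation they sit in the nvidia/dali package directory next to the include folder).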