Get the output of the inference model
Hey, thanks for the nice work. Is there any way to access the outputs of nvinfer directly? I have trained a custom model in Python, and it has its own custom bbox parsing function written in Python. Also, when I run it with process_mode=1 (primary mode), I get the following error. How should I fix it using Python?
nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 2]: Error in NvDsInferContextImpl::fillDetectionOutput() <nvdsinfer_context_impl_output_parsing.cpp:735> [UID = 2]: Failed to parse bboxes
With process_mode=2 there is no error, but also no predictions.
Are you using a YOLO model?
Yes, for now; but I'm looking for a general solution that works for any custom model.
I can help with things related to this repo and the models it supports.
For general models:
- To access the model's output tensors directly in Python, you need to use the tensor meta (see the probe sketch after this list).
- Custom models have different layers and different outputs, so you need to create a kernel to decode the outputs into the format you want to parse.
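Here is a minimal sketch of the tensor-meta approach, assuming the pyds bindings from the deepstream_python_apps repo and that `output-tensor-meta=1` is set in the nvinfer config: a buffer probe on the nvinfer src pad that walks the batch meta and exposes each output layer's raw buffer to Python. The element name `pgie`, the probe function name, and the `NUM_ELEMENTS` placeholder are illustrative, not taken from this repo.

```python
import ctypes

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

import pyds


def pgie_src_pad_buffer_probe(pad, info, u_data):
    """Read the raw output tensors nvinfer attaches when output-tensor-meta=1."""
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK

    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)

        # For a primary GIE (process_mode=1) the tensor meta is attached to the
        # frame; for a secondary GIE (process_mode=2) look in
        # obj_meta.obj_user_meta_list instead.
        l_user = frame_meta.frame_user_meta_list
        while l_user is not None:
            user_meta = pyds.NvDsUserMeta.cast(l_user.data)
            if user_meta.base_meta.meta_type == pyds.NvDsMetaType.NVDSINFER_TENSOR_OUTPUT_META:
                tensor_meta = pyds.NvDsInferTensorMeta.cast(user_meta.user_meta_data)
                for i in range(tensor_meta.num_output_layers):
                    layer = pyds.get_nvds_LayerInfo(tensor_meta, i)
                    # Wrap the raw float buffer so a Python bbox parser can use it.
                    # Exact buffer-access helpers can vary between pyds versions.
                    ptr = ctypes.cast(pyds.get_ptr(layer.buffer),
                                      ctypes.POINTER(ctypes.c_float))
                    # NUM_ELEMENTS is a placeholder: compute it from your model's
                    # output shape, e.g.:
                    # outputs = np.ctypeslib.as_array(ptr, shape=(NUM_ELEMENTS,))
                    print("output layer:", layer.layerName)
            try:
                l_user = l_user.next
            except StopIteration:
                break
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK


# Attach the probe to the src pad of the nvinfer element ("pgie" is hypothetical):
# pgie.get_static_pad("src").add_probe(Gst.PadProbeType.BUFFER,
#                                      pgie_src_pad_buffer_probe, 0)
```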
Thank you! But the tensor meta is empty. Is it necessary to pass a .so file or a similar config file?
You need to use the same config_infer file. The kernel for the yoloLayer is compiled into the .so lib.
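For reference, the relevant part of the config_infer file looks roughly like this. `output-tensor-meta=1` is what makes nvinfer attach the raw tensors read by the probe above; the lib path and parse function name follow this repo's YOLO sample configs and should be treated as assumptions to adapt for your own custom model:

```ini
[property]
# ... model / engine settings as in the repo's sample config_infer files ...

# attach the raw output tensors to the metadata so a Python probe can read them
output-tensor-meta=1

# custom lib with the compiled yoloLayer kernel and the C bbox parser
# (path and function name below are the repo's defaults; adjust to your build)
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
parse-bbox-func-name=NvDsInferParseYolo
```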
Ok. Is there any thorough and complete tutorial?
As far as I know, there isn't. There's no good documentation or tutorials for DeepStream.