deepstream_tao_apps
build engine file failed
I want to run the frcnn model from models/frcnn/faster_rcnn_resnet10.etlt. However, when I run the command ./deepstream-custom -c pgie_frcnn_tlt_config.txt -i ./sample_720p.h264, the following error occurs.
nvidia@nvidia-X10SRA:~/zongxp/tlt/deepstream_tlt_apps$ ./deepstream-custom -c pgie_frcnn_tlt_config.txt -i ../deepstream_tlt_apps_bak/sample_720p.h264
Now playing: pgie_frcnn_tlt_config.txt
0:00:00.528579923 10118 0x55ae34a35b00 INFO nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1591> [UID = 1]: Trying to create engine from model files
parseModel: Failed to open TLT encoded model file /home/nvidia/zongxp/tlt/deepstream_tlt_apps/./models/frcnn/faster_rcnn_resnet10.etlt
ERROR: tlt/tlt_decode.cpp:274 failed to build network since parsing model errors.
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:797 Failed to create network using custom network creation function
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:862 Failed to get cuda engine from custom library API
0:00:00.529707445 10118 0x55ae34a35b00 ERROR nvinfer gstnvinfer.cpp:596:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1611> [UID = 1]: build engine file failed
Segmentation fault (core dumped)
I have not changed anything, and my environment is CUDA 10.2 and TensorRT 7.0.
It's because the download bandwidth of this project has been used up, so the models you downloaded are not correct. We are working on a fix now.
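If you want to confirm this on your machine, a minimal check, assuming the failed download left a small text placeholder (for example a Git LFS pointer or an HTML error page) instead of the binary model:

# A valid .etlt model is a binary file of several MB; a broken download is
# typically a few hundred bytes of readable text.
ls -lh models/frcnn/faster_rcnn_resnet10.etlt
file models/frcnn/faster_rcnn_resnet10.etlt
# Peek at the first bytes; plain text here means the download is broken.
head -c 200 models/frcnn/faster_rcnn_resnet10.etlt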
Thanks for the notification!
We have put the models in another place. Please check the "Known issues" section in the README.
Thanks!
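If you don't want to wait for the fix, a rough manual workaround, assuming the "Known issues" section of the README gives a direct download link (the URL below is a placeholder, not the real one):

# Replace <MODEL_URL_FROM_README> with the link listed under "Known issues".
wget <MODEL_URL_FROM_README> -O models/frcnn/faster_rcnn_resnet10.etlt
# Re-run the app; the engine should now build from the valid .etlt file.
./deepstream-custom -c pgie_frcnn_tlt_config.txt -i ./sample_720p.h264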
@mchi-zg I had a similar problem; then I saw the README, downloaded the models, and it worked. Thanks!