redaction_with_deepstream

ERROR: Deserialize engine failed and failed to load plugin '/usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstlibav.so'

Open samaujs opened this issue 4 years ago • 0 comments

Hi,

When the following command is executed:

$ sudo ./deepstream-redaction-app -c ./configs/pgie_config_fd_lpd.txt -i ./sample_videos/redaction-A_1_partial.mp4 -o ./sample_videos/redaction-A_1_partial_output.mp4

it fails with the following error and warnings:

Opening in BLOCKING MODE
ERROR: Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-5.0/sources/apps/redaction_with_deepstream/configs/../fd_lpd_model/fd_lpd.caffemodel_b1_fp32.engine open error
0:00:02.667193641 10854 0x557f676f30 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1690> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-5.0/sources/apps/redaction_with_deepstream/configs/../fd_lpd_model/fd_lpd.caffemodel_b1_fp32.engine failed
0:00:02.667510968 10854 0x557f676f30 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1797> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-5.0/sources/apps/redaction_with_deepstream/configs/../fd_lpd_model/fd_lpd.caffemodel_b1_fp32.engine failed, try rebuild
0:00:02.667590940 10854 0x557f676f30 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1715> [UID = 1]: Trying to create engine from model files
Warning, setting batch size to 1. Update the dimension after parsing due to using explicit batch size.
INFO: [TRT]: --------------- Layers running on DLA:
INFO: [TRT]: --------------- Layers running on GPU: conv1_branch_2 + activation_1/Relu_branch_2, conv1_branch_1 + activation_1/Relu_branch_1, block_1a_conv_1_branch_1 + activation_2/Relu_branch_1, block_1a_conv_1_branch_2 + activation_2/Relu_branch_2, block_1a_conv_2_branch_2, block_1a_conv_2_branch_1, block_1a_conv_shortcut_branch_1 + add_1_branch_1 + activation_3/Relu_branch_1, block_1a_conv_shortcut_branch_2 + add_1_branch_2 + activation_3/Relu_branch_2, block_2a_conv_1_branch_2 + activation_4/Relu_branch_2, block_2a_conv_1_branch_1 + activation_4/Relu_branch_1, block_2a_conv_2_branch_1, block_2a_conv_2_branch_2, block_2a_conv_shortcut_branch_2 + add_2_branch_2 + activation_5/Relu_branch_2, block_2a_conv_shortcut_branch_1 + add_2_branch_1 + activation_5/Relu_branch_1, block_3a_conv_1_branch_1 + activation_6/Relu_branch_1, block_3a_conv_1_branch_2 + activation_6/Relu_branch_2, block_3a_conv_2_branch_2, block_3a_conv_2_branch_1, block_3a_conv_shortcut_branch_1 + add_3_branch_1 + activation_7/Relu_branch_1, block_3a_conv_shortcut_branch_2 + add_3_branch_2 + activation_7/Relu_branch_2, block_4a_conv_1_branch_2 + activation_8/Relu_branch_2, block_4a_conv_1_branch_1 + activation_8/Relu_branch_1, block_4a_conv_2_branch_1, block_4a_conv_2_branch_2, block_4a_conv_shortcut_branch_2 + add_4_branch_2 + activation_9/Relu_branch_2, block_4a_conv_shortcut_branch_1 + add_4_branch_1 + activation_9/Relu_branch_1, conv2d_bbox_branch_1 || conv2d_cov_branch_1, conv2d_cov/Sigmoid_branch_1, output_bbox_branch_2 || output_cov_branch_2, output_cov/Sigmoid_branch_2, output_bbox_branch_2 copy, conv2d_bbox_branch_1 copy,
INFO: [TRT]: Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
INFO: [TRT]: Detected 1 inputs and 6 output network tensors.
0:00:15.645783186 10854 0x557f676f30 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1748> [UID = 1]: serialize cuda engine to file: /opt/nvidia/deepstream/deepstream-5.0/sources/apps/redaction_with_deepstream/fd_lpd_model/fd_lpd.caffemodel_b1_gpu0_fp32.engine successfully
INFO: [Implicit Engine Info]: layers num: 3
0 INPUT  kFLOAT data        3x270x480
1 OUTPUT kFLOAT output_cov  4x17x30
2 OUTPUT kFLOAT output_bbox 16x17x30
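(Editor's note, not part of the original report: the initial "Deserialize engine failed" message is typically just a cache miss, and the log above shows DeepStream then rebuilding the engine from the caffemodel. Note, however, that the rebuilt engine is serialized as fd_lpd.caffemodel_b1_gpu0_fp32.engine, while the config appears to reference fd_lpd.caffemodel_b1_fp32.engine, so the cache will miss again on every run. A hedged sketch of pointing the nvinfer model-engine-file property at the generated file so later runs skip the rebuild; verify the exact key and path against your pgie_config_fd_lpd.txt before applying:)

```shell
# In configs/pgie_config_fd_lpd.txt, under [property], the engine path
# should match the file DeepStream actually serialized, e.g.:
#   model-engine-file=../fd_lpd_model/fd_lpd.caffemodel_b1_gpu0_fp32.engine
# One way to make that change (original kept as a .bak backup):
sed -i.bak \
  's|fd_lpd.caffemodel_b1_fp32.engine|fd_lpd.caffemodel_b1_gpu0_fp32.engine|' \
  ./configs/pgie_config_fd_lpd.txt
```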

0:00:15.702877860 10854 0x557f676f30 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus: [UID 1]: Load new model:./configs/pgie_config_fd_lpd.txt sucessfully
Pipeline ready

(deepstream-redaction-app:10854): GStreamer-WARNING **: 21:14:54.103: Failed to load plugin '/usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstlibav.so': /usr/lib/aarch64-linux-gnu/libgomp.so.1: cannot allocate memory in static TLS block

(deepstream-redaction-app:10854): GStreamer-WARNING **: 21:14:54.120: Failed to load plugin '/usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstlibav.so': /usr/lib/aarch64-linux-gnu/libgomp.so.1: cannot allocate memory in static TLS block
Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
NvMMLiteOpen : Block : BlockType = 4
===== NVMEDIA: NVENC =====
NvMMLiteBlockCreate : Block : BlockType = 4
H264: Profile = 66, Level = 0
Pipeline running
End of stream
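(Editor's note, not part of the original report: the "cannot allocate memory in static TLS block" failure from libgomp.so.1 when loading libgstlibav.so is a commonly reported issue on Jetson (aarch64) platforms, and the workaround usually suggested is to preload libgomp so its thread-local storage is allocated before GStreamer dlopen()s the plugin. A sketch under that assumption, using the library path from the log above; note that sudo strips LD_* variables from the environment, so the variable must be set inside the elevated shell:)

```shell
# Preload libgomp before the app starts (known Jetson workaround).
# sudo filters LD_PRELOAD, so set it inside the root shell:
sudo bash -c 'LD_PRELOAD=/usr/lib/aarch64-linux-gnu/libgomp.so.1 \
  ./deepstream-redaction-app -c ./configs/pgie_config_fd_lpd.txt \
  -i ./sample_videos/redaction-A_1_partial.mp4 \
  -o ./sample_videos/redaction-A_1_partial_output.mp4'

# GStreamer blacklists plugins that failed to load; after the fix,
# clearing the registry cache forces a rescan:
#   rm -rf ~/.cache/gstreamer-1.0/
```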


Is there any way to correct the above errors? Thank you.

samaujs avatar Dec 14 '20 13:12 samaujs