
Segmentation fault (core dumped)

Open formerlya opened this issue 2 years ago • 3 comments

yolov5 v5.0, tensorrt v5.0. When I use the .wts, .engine, and .cfg files generated with tensorrt, they test fine in tensorrt itself. ① When I put them into DeepStream and run deepstream-app -c deepstream_app_config.txt, and ② when I also load libmyplugins.so and run LD_PRELOAD=./libmyplugins.so deepstream-app -c deepstream_app_config.txt, I get a segmentation fault (screenshots attached). I don't know why.

formerlya avatar Jul 15 '22 11:07 formerlya

https://github.com/marcoslucianops/DeepStream-Yolo#requirements

https://github.com/marcoslucianops/DeepStream-Yolo/blob/master/docs/YOLOv5.md

marcoslucianops avatar Jul 15 '22 14:07 marcoslucianops

So the files generated by tensorrt can't be used with DeepStream? But I've seen tutorials that get this working on DeepStream 5.1 — is it just the 6.0 series that can't? There is very little information out there on Jetson + YOLOv5 + TensorRT + DeepStream 6.0, and I'm a pure beginner. o(╥﹏╥)o

formerlya avatar Jul 17 '22 10:07 formerlya

Files generated that way should not work with the latest repo files. Please use the gen_wts_yoloV5.py file to convert the YOLOv5 models.
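For reference, the conversion flow described in the repo's YOLOv5 guide looks roughly like the sketch below; the exact flags and paths are assumptions on my part, so check docs/YOLOv5.md for the current options:

# Sketch of the DeepStream-Yolo YOLOv5 conversion flow (flags/paths are assumptions; see docs/YOLOv5.md)
git clone https://github.com/ultralytics/yolov5          # upstream YOLOv5 repo
cp DeepStream-Yolo/utils/gen_wts_yoloV5.py yolov5/       # copy the converter next to the YOLOv5 code
cd yolov5
python3 gen_wts_yoloV5.py -w yolov5s.pt                  # writes yolov5s.cfg and yolov5s.wts
# Point custom-network-config / model-file in config_infer_primary_yoloV5.txt at the generated files,
# then let deepstream-app build the TensorRT engine on first run.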

marcoslucianops avatar Jul 17 '22 12:07 marcoslucianops

Hi, I am having a "segmentation fault" error. It only happens occasionally; the rest of the time the pipeline works fine. The error occurs when the pipeline loads again for processing after we send it a message.

[Ubuntu 20.04] [CUDA 11.6] [NVIDIA Driver 510.47.03] [NVIDIA DeepStream SDK 6.1] [GStreamer 1.16.2] [DeepStream-Yolo]

0:00:52.974549997 140 0x449bd00 INFO nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger: NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1909> [UID = 2]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.1/sources/deepstream_python_apps/apps/instagng_ds_inventory/test2.onnx_b1_gpu0_fp32.engine
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input:0   3x224x224
1   OUTPUT kFLOAT dropout_1 128
2   OUTPUT kFLOAT dense     29

0:00:52.975675261 140 0x449bd00 INFO nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger: NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2012> [UID = 2]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.1/sources/deepstream_python_apps/apps/instagng_ds_inventory/test2.onnx_b1_gpu0_fp32.engine
0:00:52.976119747 140 0x449bd00 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus: [UID 2]: Load new model:test_classifier_config_dli.txt successfully
Segmentation fault (core dumped)

Ideally after "nvinference-engine> [UID 2]: Load new model:test_classifier_config_dli.txt successfully" the following should happen.

Deserialize yoloLayer plugin: yolo
0:00:41.326322620 140 0x449bd00 INFO nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1909> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.1/sources/deepstream_python_apps/apps/instagng_ds_inventory/model_b2_gpu0_fp32.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 5
0   INPUT  kFLOAT data              3x608x608
1   OUTPUT kFLOAT num_detections    1
2   OUTPUT kFLOAT detection_boxes   22743x4
3   OUTPUT kFLOAT detection_scores  22743
4   OUTPUT kFLOAT detection_classes 22743

I believe it's giving this error because of the deserialization. Please help!

tuneshverma avatar Oct 06 '22 12:10 tuneshverma

Can you use gdb to debug the segmentation fault?
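A generic way to get a backtrace from deepstream-app under gdb (a minimal sketch, not specific to this repo; the config path is a placeholder):

# Run the app under gdb to see where the crash happens (config path is a placeholder)
gdb --args deepstream-app -c deepstream_app_config.txt
(gdb) run
# ... wait for the segmentation fault ...
(gdb) bt            # backtrace of the crashing thread
(gdb) info threads  # list all threads if the crash is not in the main thread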

marcoslucianops avatar Oct 26 '22 20:10 marcoslucianops

Hi, I'm running into the same problem as you. Have you managed to solve it?

lyj201644070230 avatar Jun 30 '23 06:06 lyj201644070230

Hi, no I was not able to solve it.

tuneshverma avatar Jun 30 '23 06:06 tuneshverma

Try using the new ONNX export method.
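For YOLOv5, the newer flow in this repo exports the .pt weights to ONNX and lets DeepStream build the engine from that file. A rough sketch is below; the script location and flags are assumptions, so check docs/YOLOv5.md for the exact usage:

# Sketch of the ONNX export flow (script location and flags are assumptions; see docs/YOLOv5.md)
cp DeepStream-Yolo/utils/export_yoloV5.py yolov5/   # copy the exporter into the YOLOv5 repo
cd yolov5
python3 export_yoloV5.py -w yolov5s.pt --dynamic    # exports the model to ONNX
# Set onnx-file in config_infer_primary_yoloV5.txt to the exported model and let deepstream-app
# rebuild the TensorRT engine on the next run.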

marcoslucianops avatar Jul 02 '23 14:07 marcoslucianops

Hi! I am also having this problem, except that I get a segmentation fault 9 times out of 10. Is there any way to fix this? I'm using a custom YOLOv8 model and followed the steps in the project (running on DeepStream 6.2).

Bo-Yu-Columbia avatar Jul 05 '23 13:07 Bo-Yu-Columbia

Can you send the output from the terminal?
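If it helps, a simple way to capture the full terminal output to a file (generic shell; the config path is a placeholder):

# Save both stdout and stderr to a log file while still printing to the terminal
deepstream-app -c deepstream_app_config.txt 2>&1 | tee output.log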

marcoslucianops avatar Jul 06 '23 01:07 marcoslucianops