deepstream_tao_apps
The goal is to use the PoseClassificationNet model that I trained in DeepStream.
After training the pose classification model with the TAO Toolkit PyTorch backend (https://github.com/NVIDIA/tao_pytorch_backend/tree/main/nvidia_tao_pytorch/cv/pose_classification), I created an ONNX file for use in DeepStream.
To run the pose DeepStream app, I built it with make in https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps/tree/master/apps/tao_others/deepstream-pose-classification, then set up the environment and ran ./deepstream-pose-classification-app ../../../configs/app/deepstream_pose_classification_config.yaml.
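Concretely, the steps were (paths taken from the log below):

cd /home/ubuntu/deepstream/deepstream-6.4/sources/apps/deepstream_tao_apps/apps/tao_others/deepstream-pose-classification
make
./deepstream-pose-classification-app ../../../configs/app/deepstream_pose_classification_config.yaml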
After that, I changed the two paths below in the config_infer_third_bodypose_classification.txt file referenced by deepstream_pose_classification_config.yaml.
Before change (pretrained model works well):

#model-engine-file=../../../models/poseclassificationnet/st-gcn_3dbp_nvidia.etlt_b2_gpu0_fp16.engine
#tlt-encoded-model=../../../models/poseclassificationnet/st-gcn_3dbp_nvidia.etlt
After change (the model I trained does not work):

model-engine-file=../../../models/poseclassificationnet/pc_model.onnx
tlt-encoded-model=../../../models/poseclassificationnet/pc_model.tlt
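As far as I understand the nvinfer config keys, model-engine-file is supposed to point at a serialized TensorRT .engine plan, while an ONNX model is normally referenced through the onnx-file key. For context, this is the [property] layout I would expect (the engine filename here is my assumption, following the st-gcn naming above; I have not confirmed it for this app):

[property]
# ONNX models go through onnx-file; nvinfer builds and caches a TensorRT engine from it
onnx-file=../../../models/poseclassificationnet/pc_model.onnx
# model-engine-file should name the serialized .engine plan, not the .onnx itself
model-engine-file=../../../models/poseclassificationnet/pc_model.onnx_b2_gpu0_fp16.engine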
Error output:
root@cfb9dc0c32a9:/home/ubuntu/deepstream/deepstream-6.4/sources/apps/deepstream_tao_apps/apps/tao_others/deepstream-pose-classification# ./deepstream-pose-classification-app ../../../configs/app/deepstream_pose_classification_config.yaml
width 1920 height 1080
video rtsp://user1:[email protected]:554/profile2/media.smp
video rtsp://user1:[email protected]:554/profile2/media.smp
WARNING: Overriding infer-config batch-size (1) with number of sources (2)
config_file_path:/home/ubuntu/deepstream/deepstream-6.4/sources/apps/deepstream_tao_apps/configs/nvinfer/bodypose_classification_tao/config_preprocess_bodypose_classification.txt
Unknown or legacy key specified 'is-classifier' for group [property]
*** nv-rtspsink: Launched RTSP Streaming at rtsp://localhost:8554/ds-test ***
Now playing!
ERROR: [TRT]: 1: [runtime.cpp::parsePlan::314] Error Code 1: Serialization (Serialization assertion plan->header.magicTag == rt::kPLAN_MAGIC_TAG failed.)
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:1533 Deserialize engine failed from file: /home/ubuntu/deepstream/deepstream-6.4/sources/apps/deepstream_tao_apps/models/poseclassificationnet/pc_model.onnx
0:00:07.399315288 437034 0x55647c102f50 WARN nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:
- The TensorRT version on the PC running the DeepStream app and the TensorRT version in the TAO Toolkit PyTorch backend environment are the same. When converting to ONNX in the TAO Toolkit PyTorch backend, the ONNX file is created using the export.py code.
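For what it's worth, this is how I compared the TensorRT versions in the two environments (assumes the Python tensorrt bindings are installed; the dpkg query only applies on Debian/Ubuntu):

python3 -c "import tensorrt; print(tensorrt.__version__)"
dpkg -l | grep -i tensorrt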