liuhao123t
After building, the program runs successfully on the CPU, but on the GPU it produces the following error: `root@4dbec2b03d4e:/ssd/liuhao/yolov5-onnxruntime/build# ./yolo_ort --model_path ../models/yolov5m.onnx --image ../images/bus.jpg --class_names ../models/coco.names --gpu Inference device: GPU...
It is hard to understand the algorithm without any introduction.
I am trying to use protobuf to convey large messages, such as point clouds. When the message gets bigger, the publisher program can still run, but the subscriber program gets stuck after...
Can Cyber RT use shared memory for inter-process communication?