Jian Lin
> Why not use DeepStream? I think TRT might be more popular, and DeepStream also uses TRT as its inference engine.
Reopened this issue so that others with the same problem can see it.
> Addendum: TensorRT 8.2.5.1 does not produce any warnings, but models quantized under TensorRT 8.2 and TensorRT 8.4 are not interchangeable (a model generated under 8.4 cannot be run with 8.2 dependencies). In my tests the model generated under 8.4 is smaller (about 86.8% of the size of the 8.2 model) and final inference is about 7.5% faster. The tested model was yolov6n 416*416, fp32, on a GTX 1660 Ti max...
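The incompatibility described above is expected: a serialized TensorRT engine is only valid for the exact TensorRT version that built it. A minimal sketch, assuming the `tensorrt` Python package; the sidecar-file convention and file names here are hypothetical, used only to make a version mismatch fail with a clear message:

```python
# Sketch only: record the TensorRT version that built an engine in a
# sidecar file (a hypothetical convention, not part of TensorRT) so that
# loading it under a different version fails fast instead of crashing
# inside deserialization.
import os
import tensorrt as trt

ENGINE_PATH = "yolov6n.trt"          # hypothetical engine file
VERSION_PATH = ENGINE_PATH + ".ver"  # hypothetical sidecar file

def save_build_version():
    with open(VERSION_PATH, "w") as f:
        f.write(trt.__version__)

def load_engine():
    if os.path.exists(VERSION_PATH):
        built_with = open(VERSION_PATH).read().strip()
        if built_with != trt.__version__:
            # Engines are not portable across TensorRT versions
            # (e.g. an 8.4-built engine will not load under 8.2).
            raise RuntimeError(f"engine built with TensorRT {built_with}, "
                               f"runtime is {trt.__version__}")
    logger = trt.Logger(trt.Logger.WARNING)
    with open(ENGINE_PATH, "rb") as f:
        return trt.Runtime(logger).deserialize_cuda_engine(f.read())
```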
> Traceback (most recent call last):
>   File "train.py", line 184, in <module>
>     if __name__ == '__main__': YoloTrain().train()
>   File "train.py", line 144, in train
>     for train_data in pbar:
> ...
> Hey all, thanks, I've fixed the error.

In the new version, this bug still does not seem to be fixed.
> Object detection, using TensorRT for inference; the model is on an offline host at the moment and cannot be provided.
> We will support development of this linspace OP as soon as possible. In addition, Paddle Inference can easily deploy quantized models with TensorRT; you could also try it and see whether it meets your deployment needs. https://www.paddlepaddle.org.cn/inference/product_introduction/inference_intro.html

For now the ONNX->TensorRT route involves less code and is also more lightweight on Jetson. Thanks for the suggestion.
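As a rough illustration of how little code the ONNX->TensorRT route needs, here is a minimal sketch using the TensorRT 8.x Python API; the file names are placeholders, not from this thread:

```python
# Sketch: parse an ONNX model and build a serialized TensorRT engine.
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open("model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise SystemExit("failed to parse ONNX model")

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)  # optional: FP16 if the GPU supports it

engine_bytes = builder.build_serialized_network(network, config)
with open("model.trt", "wb") as f:
    f.write(engine_bytes)
```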
> I don't quite get what your problem is; can you elaborate on it? For example, this code: ``` # output [1, 8400, 85] # slice boxes, obj_score, class_scores strides...
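For illustration, a minimal NumPy sketch of the slicing that comment refers to, splitting a raw [1, 8400, 85] YOLO output into boxes, objectness, and class scores; the threshold value is an assumption and the repo's actual decoding may differ:

```python
import numpy as np

# Placeholder tensor standing in for the model output of shape [1, 8400, 85].
output = np.random.rand(1, 8400, 85).astype(np.float32)

pred = output[0]            # [8400, 85]
boxes = pred[:, 0:4]        # cx, cy, w, h
obj_score = pred[:, 4:5]    # objectness
class_scores = pred[:, 5:]  # 80 class probabilities

scores = obj_score * class_scores   # combined confidence, [8400, 80]
class_ids = scores.argmax(axis=1)   # best class per box
confidences = scores.max(axis=1)    # confidence of that class
keep = confidences > 0.4            # confidence threshold (conf_thres)
print(boxes[keep].shape, class_ids[keep].shape)
```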
> python export.py -o yolov5n.onnx -e yolov5n.trt --end2end
> Namespace(calib_batch_size=8, calib_cache='./calibration.cache', calib_input=None, calib_num_images=5000, conf_thres=0.4, end2end=True, engine='yolov5n.trt', iou_thres=0.5, max_det=100, onnx='yolov5n.onnx', precision='fp16', verbose=False, workspace=1)
> [TensorRT] WARNING: onnx2trt_utils.cpp:220: Your ONNX model has been generated...
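For context, a minimal sketch of deserializing the resulting `yolov5n.trt` and running one inference, assuming TensorRT 8.x and pycuda (both inferred from the thread rather than stated in it); binding names and shapes depend on the exported model:

```python
import numpy as np
import pycuda.autoinit  # creates a CUDA context on import
import pycuda.driver as cuda
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
with open("yolov5n.trt", "rb") as f:
    engine = trt.Runtime(logger).deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# Allocate one host/device buffer pair per binding.
bindings, host_bufs = [], []
for i in range(engine.num_bindings):
    shape = engine.get_binding_shape(i)
    dtype = trt.nptype(engine.get_binding_dtype(i))
    host = cuda.pagelocked_empty(trt.volume(shape), dtype)
    dev = cuda.mem_alloc(host.nbytes)
    bindings.append(int(dev))
    host_bufs.append((host, dev, engine.binding_is_input(i)))

# Placeholder input; assumes binding 0 is the image tensor.
image = np.zeros(tuple(engine.get_binding_shape(0)), dtype=np.float32)
host_bufs[0][0][:] = image.ravel()

stream = cuda.Stream()
for host, dev, is_input in host_bufs:
    if is_input:
        cuda.memcpy_htod_async(dev, host, stream)
context.execute_async_v2(bindings, stream.handle)
for host, dev, is_input in host_bufs:
    if not is_input:
        cuda.memcpy_dtoh_async(host, dev, stream)
stream.synchronize()
```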
> Dear authors,
>
> Based on your code, I can cmake and make it; however, when I ran ./yolo I got the following error:
>
> model size: 88806240...