Wang Xinyu

Results: 169 comments by Wang Xinyu

Do we support X2 in this repo? I thought we only support this one https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth

Did you try the officially trained .pt model? https://github.com/wang-xinyu/tensorrtx/tree/master/yolov5#different-versions-of-yolov5

Please check the readme. The last CLI argument specifies the model type: s, m, or l.

Like this:

```
./yolov5 -s yolov5s.wts yolov5s.engine s
./yolov5 -s yolov5m.wts yolov5m.engine m
./yolov5 -s yolov5l.wts yolov5l.engine l
```

Latency includes preprocessing, inference, and postprocessing, in milliseconds. Tested on P40, TensorRT 8.4.

| Model | Latency (CPU preprocessing) | Latency (CUDA preprocessing) | Optimization |
|:---- |:----- |:----- |:----- |
| yolov5s... |
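
For reference, a minimal sketch of how such an end-to-end number can be measured, assuming hypothetical preprocess()/infer()/postprocess() stages rather than the repo's actual functions:

```
#include <chrono>
#include <iostream>

// Placeholder stages; in the real code these would be the YOLOv5 preprocessing,
// TensorRT inference, and NMS/decoding postprocessing.
static void preprocess()  { /* resize + normalize */ }
static void infer()       { /* enqueue TensorRT execution */ }
static void postprocess() { /* decode boxes + NMS */ }

int main() {
    auto t0 = std::chrono::steady_clock::now();
    preprocess();
    infer();
    postprocess();
    auto t1 = std::chrono::steady_clock::now();
    double ms = std::chrono::duration<double, std::milli>(t1 - t0).count();
    std::cout << "end-to-end latency: " << ms << " ms" << std::endl;
    return 0;
}
```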

This CUDA preprocessing for YOLO uses a warp-affine method for resizing, which is slightly different from cv::resize(). Hence the mAP is slightly different. Below are the mAP (IoU=0.50:0.95 | area=all) results...
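
For illustration, here is a minimal host-side sketch, using OpenCV's cv::warpAffine, of the kind of letterbox-style affine resize the CUDA kernel performs. The scale/offset and border choices are assumptions for illustration, not the kernel's exact code:

```
#include <algorithm>
#include <opencv2/opencv.hpp>

// Sketch of a letterbox-style warp-affine resize. The actual CUDA kernel may
// differ in rounding and border handling, which is why its mAP differs
// slightly from a plain cv::resize().
cv::Mat warpAffineResize(const cv::Mat& src, int dstW, int dstH) {
    float scale = std::min(dstW / (float)src.cols, dstH / (float)src.rows);
    // Affine matrix: uniform scale, then center the scaled image in the output.
    cv::Mat M = (cv::Mat_<float>(2, 3) <<
        scale, 0.f,   (dstW - scale * src.cols) * 0.5f,
        0.f,   scale, (dstH - scale * src.rows) * 0.5f);
    cv::Mat dst;
    cv::warpAffine(src, dst, M, cv::Size(dstW, dstH), cv::INTER_LINEAR,
                   cv::BORDER_CONSTANT, cv::Scalar(114, 114, 114));
    return dst;
}
```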

Yolov5s Predict() latency (P40, TRT 8.4.3.1, 640x640):
- Currently, with UseCudaPreprocessing & EnablePinnedMemory: 41ms
- After moving output tensors to class members, which avoids reallocating output buffers (see the sketch below): 25ms
- After using external stream,...
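
A minimal sketch of the buffer-reuse idea, assuming hypothetical names (Yolov5Predictor, hostOutput_, etc.): output buffers are allocated once as class members, pinned host memory plus device memory, and an externally supplied CUDA stream is reused on every call:

```
#include <cuda_runtime.h>
#include <cstddef>

class Yolov5Predictor {
public:
    Yolov5Predictor(size_t outputBytes, cudaStream_t externalStream)
        : outputBytes_(outputBytes), stream_(externalStream) {
        // Pinned (page-locked) host memory speeds up async device-to-host copies.
        cudaMallocHost(&hostOutput_, outputBytes_);
        cudaMalloc(&deviceOutput_, outputBytes_);
    }
    ~Yolov5Predictor() {
        cudaFreeHost(hostOutput_);
        cudaFree(deviceOutput_);
    }
    void Predict() {
        // ... enqueue TensorRT inference on stream_, writing to deviceOutput_ ...
        // Copy results back without reallocating anything per call.
        cudaMemcpyAsync(hostOutput_, deviceOutput_, outputBytes_,
                        cudaMemcpyDeviceToHost, stream_);
        cudaStreamSynchronize(stream_);
    }
private:
    size_t outputBytes_;
    cudaStream_t stream_;   // external stream supplied by the caller
    void* hostOutput_ = nullptr;
    void* deviceOutput_ = nullptr;
};
```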

PPClas end-to-end test on T4, TRT 8.4. Latency in ms.

| Version | Model | FP32 | FP16 | INT8 |
| -- | -- | -- | -- | -- |
| 0.3.0 | PP-LCNetv2... |

PPDet end-to-end test on T4, TRT 8.4. Latency in ms.

|  | Model | FP32 |
| -- | -- | -- |
| Before | PPYOLOE | 35.05 |
| After | PPYOLOE | 32.97 |

Inference...

PPSeg UNet, PPInfer GPU backend, T4. Latency in ms.

| Input size | Preprocess latency, before | after |
| -- | -- | -- |
| 2048x1024 | 33.921... |