SthPhoenix

Results 136 comments of SthPhoenix

I've downloaded it, but some more info is still needed ) What dataset was used for training? Are there any performance metrics available? I haven't seen any info about ir-152...

@MyraBaba unfortunately I can't check your model in the near future, but just from a benchmarks perspective they seem to be pretty much the same. Though I must admit that for later models...

@MyraBaba, I have added support for `yolov5-face` models, you can check it )

> Did you convert to onnx ? I didn’t see the model where is it :) It's converted to ONNX and will be automatically downloaded when you switch the detector to...

Docker images are built with the CPU version of onnxruntime. Its intended use case is a fallback when no GPU is available. You can install onnxruntime-gpu, though in its latest versions you...

> is there any speed / accuracy difference between trt and onnx ? > > It would be good if we change trt to onnx in deploy.sh (in gpu version)...

You should also pass the [CUDA execution provider](https://onnxruntime.ai/docs/execution-providers/) argument in recent versions of onnxruntime

Add it to all lines with `onnxruntime.InferenceSession` in onnxrt_backend.py
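For context: in recent onnxruntime versions `InferenceSession` expects an explicit `providers` list. A minimal sketch of the change, assuming CUDA is preferred with a CPU fallback (the `select_providers` helper is my own illustration, not code from onnxrt_backend.py):

```python
def select_providers(available, prefer_gpu=True):
    """Pick execution providers in preference order, falling back to CPU.

    `available` is what onnxruntime.get_available_providers() returns.
    """
    preferred = ["CUDAExecutionProvider", "CPUExecutionProvider"]
    if not prefer_gpu:
        preferred = ["CPUExecutionProvider"]
    # Keep only providers actually present in this onnxruntime build
    return [p for p in preferred if p in available]

# In onnxrt_backend.py the session creation would then look like:
#   import onnxruntime
#   providers = select_providers(onnxruntime.get_available_providers())
#   session = onnxruntime.InferenceSession(model_path, providers=providers)

print(select_providers(["CPUExecutionProvider"]))
# With only the CPU build installed this prints ['CPUExecutionProvider']
```

With onnxruntime-gpu installed, `get_available_providers()` includes `CUDAExecutionProvider`, so the session silently uses the GPU; with the CPU-only build it falls back without code changes.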

That's pretty slow. Which GPU and which model parameters have you used?

Try enabling force_fp16 then. I'm getting around 145-150 img/sec with one worker and 10 client threads with fp16 enabled on an RTX 2080 Super for Stallone.jpg
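For reference, a throughput figure like that comes from hammering the REST endpoint from several client threads at once. A rough sketch of such a benchmark client; the request function here is a stub, and any real endpoint URL and payload would depend on your deployment (they are not the actual InsightFace-REST API):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def send_request(image_path):
    # Placeholder for an actual HTTP POST to the inference endpoint,
    # e.g. requests.post("http://localhost:18081/extract", ...) (hypothetical URL).
    time.sleep(0.001)  # simulate network + inference latency
    return True

def benchmark(image_path, total_requests=100, threads=10):
    """Fire `total_requests` requests from `threads` threads, return img/sec."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=threads) as pool:
        results = list(pool.map(send_request, [image_path] * total_requests))
    elapsed = time.perf_counter() - start
    assert all(results), "some requests failed"
    return total_requests / elapsed

print(f"{benchmark('Stallone.jpg'):.1f} img/sec")
```

With a real `send_request`, the measured rate reflects the whole pipeline (HTTP overhead included), which is why worker count and client thread count both matter alongside fp16.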