Triton-TensorRT-Inference-CRAFT-pytorch

Advanced inference pipeline using NVIDIA Triton Inference Server for CRAFT text detection (PyTorch), including a converter from PyTorch -> ONNX -> TensorRT and inference pipelines (TensorRT, Triton server -...
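The first stage of the converter described above is a PyTorch -> ONNX export. A minimal sketch of what that step could look like is below; the checkpoint name, output names, and input resolution are assumptions, not taken from this repo, so substitute your own CRAFT model and settings.

```python
# Sketch: export a CRAFT PyTorch model to ONNX with dynamic batch/size axes.
# All names here ("craft.onnx", "input", "output", "feature") are illustrative
# assumptions -- match them to your actual model before converting to TensorRT.

ONNX_PATH = "craft.onnx"
INPUT_NAMES = ["input"]
OUTPUT_NAMES = ["output", "feature"]  # CRAFT forward typically returns (y, feature)
DYNAMIC_AXES = {
    # Mark batch, height, and width as dynamic so TensorRT can build
    # an engine that accepts variable input shapes.
    "input": {0: "batch", 2: "height", 3: "width"},
    "output": {0: "batch"},
}

def export_to_onnx(model, onnx_path=ONNX_PATH):
    """Export an eval-mode CRAFT model to ONNX using a dummy NCHW input."""
    import torch  # imported here so the export settings above stay inspectable

    model.eval()
    dummy = torch.randn(1, 3, 768, 768)  # assumed input resolution
    torch.onnx.export(
        model,
        dummy,
        onnx_path,
        input_names=INPUT_NAMES,
        output_names=OUTPUT_NAMES,
        dynamic_axes=DYNAMIC_AXES,
        opset_version=11,
    )
```

The resulting `.onnx` file can then be handed to the TensorRT builder (or `trtexec`) to produce an engine for the inference pipelines mentioned above.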

Issues (1)

Hello, I found that there is no speedup using TensorRT (FP32, FP16) inference; is that right? And I found that batch inference for the torch model has no speedup either. I do not...
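When checking claims like this, it helps to time inference the same way for every backend, with warm-up iterations excluded so one-time setup (engine deserialization, CUDA context creation, cuDNN autotuning) does not skew the numbers. A minimal, backend-agnostic timing sketch is below; `infer` is a stand-in for whatever callable wraps your TensorRT engine or torch model.

```python
import time

def bench(infer, batch, warmup=3, iters=10):
    """Return mean latency per image in milliseconds for an inference callable.

    `infer` is any callable taking a batch; `batch` is a sized container
    (e.g. a list of images or a batched tensor with len() == batch size).
    """
    for _ in range(warmup):
        infer(batch)  # warm-up: excluded from timing

    t0 = time.perf_counter()
    for _ in range(iters):
        infer(batch)
    total = time.perf_counter() - t0

    return total / iters / len(batch) * 1000.0  # ms per image
```

Comparing `bench(trt_fp16_infer, batch)` against `bench(torch_infer, batch)` at several batch sizes makes it clear whether FP16 or batching actually reduces per-image latency, or whether the workload is bottlenecked elsewhere (e.g. pre/post-processing on the CPU).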