
pytorch vs tensorrt

Open SabraHashemi opened this issue 3 years ago • 7 comments

Hi, I tried to get maximum performance and FPS, but in the end I found that PyTorch on a small fixed input size beats TensorRT. For example, my images ranged from about 150×40 to 400×200. With TensorRT I got 75×20 images as a result, where the text boxes were faint and crowded together, but with PyTorch at a fixed size of 150×40 I got minimal loss of accuracy with maximum speed. Hope this helps; if I am wrong and there is a better way, please feel free to comment or message me. Thank you.

SabraHashemi avatar Jun 09 '21 21:06 SabraHashemi
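The fixed-size approach described above can be sketched as follows. This is a minimal illustration only: `CraftStandIn` is a hypothetical placeholder standing in for the real CRAFT network (which is not shown in this thread), and the resize is done with `torch.nn.functional.interpolate`.

```python
import torch
import torch.nn.functional as F

# Hypothetical stand-in for the CRAFT network; in practice, substitute the
# real model loaded from craft_mlt_25k.pth.
class CraftStandIn(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = torch.nn.Conv2d(3, 2, kernel_size=3, padding=1)

    def forward(self, x):
        # CRAFT itself returns region/affinity score maps; the stand-in
        # just returns a 2-channel map of the same spatial size.
        return self.conv(x)

net = CraftStandIn().eval()

def infer_fixed_size(image, size=(40, 150)):
    """Resize any input image to a fixed (H, W) before inference,
    so the network always sees the same shape."""
    x = F.interpolate(image, size=size, mode="bilinear", align_corners=False)
    with torch.no_grad():
        return net(x)

# A 1x3x200x400 image is resized down to 1x3x40x150 before the forward pass.
image = torch.rand(1, 3, 200, 400)
out = infer_fixed_size(image)
print(out.shape)  # torch.Size([1, 2, 40, 150])
```

Keeping the input shape constant also avoids repeated CUDA memory reallocation and, for engines like TensorRT, re-optimization for new shapes.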

Hello, I have the same problem! I use a Tesla V100 (32 GB) GPU for inference, with a fixed input size of 3×1280×1280. There is no speedup for TensorRT FP32 or FP16, and when I use TensorRT INT8, the precision and recall are 0. I am so confused.

Besides, for the torch model, I found that batch inference does not give a speedup either. Please help!

ltm920716 avatar Aug 22 '21 11:08 ltm920716

I got a result with TensorRT that is perhaps more like yours: when I gave large images (2000×1000) to CRAFT, I got the best performance (about 1/4 of the execution time), but when my image was as small as 200×400, I didn't observe any speedup.

But you said that for torch, "batch inference does not give a speedup" — what do you mean?

SabraHashemi avatar Aug 29 '21 14:08 SabraHashemi

Hi @sabrabano0, for the original torch model, the FPS at batch size 1 and batch size 16 is the same.

ltm920716 avatar Aug 30 '21 01:08 ltm920716
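One thing worth ruling out when batch 1 and batch 16 show the same FPS is the timing method itself: CUDA kernel launches are asynchronous, so a plain wall-clock measurement can return before the GPU has finished. A minimal timing sketch (using a small stand-in model, not the actual CRAFT network) that synchronizes before reading the clock:

```python
import time
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
# Stand-in model; replace with the real CRAFT network in practice.
net = torch.nn.Conv2d(3, 2, kernel_size=3, padding=1).to(device).eval()

def time_per_image(batch_size, iters=5, hw=(320, 320)):
    """Measure seconds per image at a given batch size."""
    x = torch.rand(batch_size, 3, *hw, device=device)
    with torch.no_grad():
        net(x)  # warm-up: first call pays one-time setup costs
        if device == "cuda":
            torch.cuda.synchronize()  # drain queued kernels before timing
        start = time.perf_counter()
        for _ in range(iters):
            net(x)
        if device == "cuda":
            torch.cuda.synchronize()  # ensure all GPU work has finished
        elapsed = time.perf_counter() - start
    return elapsed / (iters * batch_size)

per_image_b1 = time_per_image(1)
per_image_b16 = time_per_image(16)
print(f"batch 1:  {per_image_b1:.6f} s/image")
print(f"batch 16: {per_image_b16:.6f} s/image")
```

If the per-image time still does not drop with larger batches after synchronized timing, the model may already saturate the GPU at batch 1 for this input size.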

@sabrabano0 Could you tell me how to convert an ONNX model file to a TensorRT model file? I have produced an ONNX model file, but I get an error when building the TensorRT model file. Looking forward to hearing from you. Thank you very much.

xuhui1994 avatar Dec 23 '21 02:12 xuhui1994
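For the ONNX-to-TensorRT step asked about here, one common route is NVIDIA's `trtexec` tool, which ships with TensorRT. A command sketch only: the file names and the input tensor name `input` are assumptions (match them to your exported model), and running this requires a machine with TensorRT installed.

```shell
# Build a TensorRT engine from an exported ONNX model with trtexec.
# --fp16 enables half precision; the shape flags declare a dynamic input
# range (NCHW) so different image sizes can share one engine.
trtexec --onnx=craft.onnx \
        --saveEngine=craft_fp16.engine \
        --fp16 \
        --minShapes=input:1x3x40x150 \
        --optShapes=input:1x3x200x400 \
        --maxShapes=input:1x3x1280x1280
```

Note that the dynamic shape flags only work if the ONNX model was exported with dynamic spatial axes; otherwise the engine is fixed to the export-time input shape.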

How do we convert this CRAFT torch model (craft_mlt_25k.pth) to TensorRT format or ONNX format?

kishcs avatar Jul 26 '22 03:07 kishcs

> Hello, I have the same problem! I use a Tesla V100 (32 GB) GPU for inference, with a fixed input size of 3×1280×1280. There is no speedup for TensorRT FP32 or FP16, and when I use TensorRT INT8, the precision and recall are 0. I am so confused.
>
> Besides, for the torch model, I found that batch inference does not give a speedup either. Please help!

@SabraHashemi Can you point me to any repository or blog on converting CRAFT from PyTorch to TensorRT?

Justsubh01 avatar May 12 '23 12:05 Justsubh01

> How do we convert this CRAFT torch model (craft_mlt_25k.pth) to TensorRT format or ONNX format?

Hi @kishcs, did you find any solution for this? If yes, then please guide me.

Justsubh01 avatar May 12 '23 12:05 Justsubh01