王开开
RT_Test

```
def predict_by_batch_pip(batch_size, imgs, modeltrt):
    try:
        pred_list = []
        imgs = torch.stack(imgs)
        start = time.time()
        output = modeltrt.apply(imgs)
        end = time.time()
        print('batch infer time ::', end - start)
        global...
```

test data (3558 images), img_size: 880*660, batch_size = 8, thread = 1, gpu = 0:

C++ TensorRT
FP32: total_infer_time = 267ms, total_time = 69001ms
FP16: total_infer_time = 246ms, total_time...
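One possible source of skew in numbers like these (an assumption on my part, not something confirmed in this thread): CUDA kernel launches are asynchronous, so wrapping the call in `time.time()` without synchronizing the device can mis-measure the actual inference time. A minimal sketch of a timing wrapper; `timed_infer` and its `synchronize` parameter are hypothetical helpers introduced here for illustration:

```python
import time

def timed_infer(infer_fn, batch, synchronize=lambda: None):
    """Time a single inference call.

    synchronize() should block until all pending GPU work has finished;
    with PyTorch you would pass torch.cuda.synchronize. The default no-op
    is fine for CPU-only code.
    """
    synchronize()                 # drain previously queued kernels before starting the clock
    start = time.time()
    output = infer_fn(batch)
    synchronize()                 # wait for the inference kernels themselves to complete
    elapsed = time.time() - start
    return output, elapsed
```

With the code above, the call might look like `timed_infer(modeltrt.apply, torch.stack(imgs).cuda(), torch.cuda.synchronize)`; without the synchronization points, the measured interval can reflect only the kernel-launch overhead rather than the full batch inference.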
I understand what you mean, but the inference time of Python TensorRT still lags behind that of C++ TensorRT. Is this normal?
Can I compile this library on Windows 10 with Python 3.6? I have tried to compile it many times, but every attempt failed. The test results above use the prebuilt model that you provided.
Could you please share your email address so that we can communicate more conveniently?
Hello. Have you continued to test this speed problem?

> Perhaps this gap between C++ vs Python TRT can be solved by providing proper optimization level to CMakeLists.txt.

I have...