Face-Restoration-TensorRT

how to use this in python?

Open debasishaimonk opened this issue 2 years ago • 7 comments

How do I integrate the engine file in Python in order to run inference?

debasishaimonk avatar Oct 20 '23 12:10 debasishaimonk

@debasishaimonk Sorry for the delayed response. I think the engine file doesn't need to be changed; you need to write the Python inference code.

bychen7 avatar Oct 26 '23 08:10 bychen7

> @debasishaimonk Sorry for the delayed response. I think the engine file doesn't need to be changed; you need to write the Python inference code.

Correct, I've used the engine file with the TensorRT Python API. You just have to set up the memory and run inference. I can share the code with you if you wish.
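In the meantime, here's a minimal sketch of what that setup can look like with the TensorRT 8.x Python API and pycuda. The engine path, the single input/output binding layout, and the (1, 3, 512, 512) float32 input shape are assumptions, not something this repo confirms:

```python
import numpy as np
import pycuda.autoinit  # noqa: F401 (creates a CUDA context on import)
import pycuda.driver as cuda
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Deserialize the engine file built by this repo (path is an assumption).
with open("model.engine", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# Allocate a pinned host buffer and a device buffer for every binding.
stream = cuda.Stream()
host_bufs, dev_bufs, bindings = [], [], []
for i in range(engine.num_bindings):
    dtype = trt.nptype(engine.get_binding_dtype(i))
    size = trt.volume(engine.get_binding_shape(i))
    host = cuda.pagelocked_empty(size, dtype)
    dev = cuda.mem_alloc(host.nbytes)
    host_bufs.append(host)
    dev_bufs.append(dev)
    bindings.append(int(dev))

# Assumed preprocessing: an NCHW float32 face crop; replace the random
# tensor with the model's real input and normalization.
img = np.random.rand(1, 3, 512, 512).astype(np.float32)
np.copyto(host_bufs[0], img.ravel())

# Host -> device, run inference, device -> host, then wait on the stream.
cuda.memcpy_htod_async(dev_bufs[0], host_bufs[0], stream)
context.execute_async_v2(bindings=bindings, stream_handle=stream.handle)
cuda.memcpy_dtoh_async(host_bufs[1], dev_bufs[1], stream)
stream.synchronize()

restored = host_bufs[1].reshape(engine.get_binding_shape(1))
```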

Fibonacci134 avatar Nov 23 '23 18:11 Fibonacci134

> @debasishaimonk Sorry for the delayed response. I think the engine file doesn't need to be changed; you need to write the Python inference code.

> Correct, I've used the engine file with the TensorRT Python API. You just have to set up the memory and run inference. I can share the code with you if you wish.

I have written Python inference code, but FP32 model inference takes about the same time as the original PyTorch inference, about 27 ms on a 3090 graphics card. FP16 inference is faster, taking less than 20 ms, but the generated images are distorted.
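A common workaround for FP16 artifacts, untested against this particular model so treat it as an assumption, is to build the FP16 engine while pinning numerically sensitive layers to FP32. A sketch with the TensorRT 8.x builder API, assuming a "model.onnx" export that this repo does not ship:

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(TRT_LOGGER)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, TRT_LOGGER)

# Assumed ONNX export path; adjust to however you exported the model.
with open("model.onnx", "rb") as f:
    parser.parse(f.read())

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)
# Force TensorRT to honor the per-layer precisions set below.
config.set_flag(trt.BuilderFlag.OBEY_PRECISION_CONSTRAINTS)

# Keep layers that often overflow in half precision in FP32. Which layers
# actually cause the distortion is model-specific guesswork; the name
# match on "norm" is a heuristic, not part of this repo.
for i in range(network.num_layers):
    layer = network.get_layer(i)
    if layer.type == trt.LayerType.SOFTMAX or "norm" in layer.name.lower():
        layer.precision = trt.float32
        layer.set_output_type(0, trt.float32)

engine_bytes = builder.build_serialized_network(network, config)
with open("model_fp16.engine", "wb") as f:
    f.write(engine_bytes)
```

If only a few layers are the culprit, this usually keeps most of the FP16 speedup while removing the visible distortion.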

luhairong11 avatar Dec 08 '23 01:12 luhairong11

@Fibonacci134 Please share the Python program that runs model.engine, thank you! Please send it to [email protected]

fanghaiquan1 avatar Dec 25 '23 01:12 fanghaiquan1

I need the Python inference code too, thanks: [email protected] @Fibonacci134

ggzzzzz628 avatar Oct 04 '24 19:10 ggzzzzz628

> @debasishaimonk Sorry for the delayed response. I think the engine file doesn't need to be changed; you need to write the Python inference code.

> Correct, I've used the engine file with the TensorRT Python API. You just have to set up the memory and run inference. I can share the code with you if you wish.

> I have written Python inference code, but FP32 model inference takes about the same time as the original PyTorch inference, about 27 ms on a 3090 graphics card. FP16 inference is faster, taking less than 20 ms, but the generated images are distorted.

Can you share the Python inference code? Thanks! [email protected]

lu-xinyuan avatar Dec 24 '24 02:12 lu-xinyuan

> @debasishaimonk Sorry for the delayed response. I think the engine file doesn't need to be changed; you need to write the Python inference code.

> Correct, I've used the engine file with the TensorRT Python API. You just have to set up the memory and run inference. I can share the code with you if you wish.

> I have written Python inference code, but FP32 model inference takes about the same time as the original PyTorch inference, about 27 ms on a 3090 graphics card. FP16 inference is faster, taking less than 20 ms, but the generated images are distorted.

> Can you share the Python inference code? Thanks! [email protected]

It's been too long, and I'm no longer working in that area. The code is hard to find now.

luhairong11 avatar Dec 24 '24 02:12 luhairong11