Face-Restoration-TensorRT
How to use this in Python?
How do I integrate the engine file in Python in order to run inference?
@debasishaimonk Sorry for the delayed response. I think the engine file doesn't need to be changed; you need to write the Python inference code.
Correct, I've used the engine file with the Python API for TensorRT. You just have to set up the memory and run inference. I can share the code with you if you wish.
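For anyone looking for a starting point, below is a minimal sketch of the "set up the memory and run inference" step using the TensorRT 8.x Python API with PyCUDA. The 512x512 input size, the [-1, 1] normalization, and the binding order (0 = input, 1 = output) are assumptions typical of GFPGAN-style face-restoration models, not confirmed details of this repo — check them against your own engine:

```python
import numpy as np

def preprocess(bgr):
    """HWC uint8 BGR -> NCHW float32 in [-1, 1] (assumed normalization)."""
    x = bgr[:, :, ::-1].astype(np.float32) / 255.0   # BGR -> RGB
    x = (x - 0.5) / 0.5                              # [0, 1] -> [-1, 1]
    return np.ascontiguousarray(x.transpose(2, 0, 1)[None])

def infer(engine_path, bgr):
    """Run one image through a serialized TensorRT engine (TensorRT 8.x API)."""
    import tensorrt as trt
    import pycuda.driver as cuda
    import pycuda.autoinit  # noqa: F401 -- creates a CUDA context

    logger = trt.Logger(trt.Logger.WARNING)
    with open(engine_path, "rb") as f, trt.Runtime(logger) as runtime:
        engine = runtime.deserialize_cuda_engine(f.read())
    context = engine.create_execution_context()

    inp = preprocess(bgr)
    # Assumes binding 1 is the single output and the engine has static shapes.
    out = np.empty(tuple(engine.get_binding_shape(1)), dtype=np.float32)

    # Allocate device buffers, copy in, execute, copy out.
    d_in = cuda.mem_alloc(inp.nbytes)
    d_out = cuda.mem_alloc(out.nbytes)
    cuda.memcpy_htod(d_in, inp)
    context.execute_v2([int(d_in), int(d_out)])
    cuda.memcpy_dtoh(out, d_out)

    # NCHW [-1, 1] -> HWC uint8 BGR
    img = out[0].transpose(1, 2, 0)[:, :, ::-1]
    return np.clip((img * 0.5 + 0.5) * 255.0, 0, 255).astype(np.uint8)
```

Note the input array is fed as float32 even for an FP16 engine; TensorRT converts at the input binding as long as the binding's dtype is float32.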
I have written Python inference code, but FP32 model inference takes about the same time as original PyTorch inference, about 27ms on a 3090 graphics card. FP16 inference is faster, taking less than 20ms, but the generated images are distorted.
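One common cause of distorted output from an FP16 engine (not confirmed as the cause here) is numeric overflow: float16 tops out around 65504, so large intermediate activations saturate to inf, and very small values underflow to zero. A quick NumPy illustration:

```python
import numpy as np

# float16 can only represent magnitudes up to ~65504; larger
# intermediate activations overflow to inf, which corrupts the
# image an FP16 engine produces.
print(np.finfo(np.float16).max)             # 65504.0
print(np.float32(1e5).astype(np.float16))   # inf  -- overflowed
print(np.float32(1e-8).astype(np.float16))  # 0.0  -- underflowed to zero
```

If overflow is the culprit, one option is to keep the offending layers in FP32 while converting the rest, e.g. via TensorRT's per-layer precision controls, rather than building the whole engine in FP16.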
@Fibonacci134 Please share the Python program that runs model.engine, thanks! Please send it to [email protected]
I need the Python inference code, thanks. [email protected] @Fibonacci134
Can you share the Python inference code? Thanks! [email protected]
Can you share the Python inference code? Thanks! [email protected]
It's been too long, and I'm no longer working on that area. The code is hard to find now.