Paddle
Paddle inference: can the input already be on the GPU? Is there any interface provided for this?
Please ask your question
I have put my input on the GPU and want to run inference, but I only found input_tensor.copy_from_cpu(). Is there an interface to read input data directly from the GPU?
Greatly appreciated!
Hi! We've received your issue and will arrange for technicians to answer it as soon as possible; please be patient. Please double-check that you have provided a clear problem description, reproduction code, environment & version, and error messages. You may also check the API docs, FAQ, GitHub issues, and the AI community for an answer. Have a nice day!
Sorry, due to several considerations we don't have such an interface for the time being. Please describe your usage scenario in detail so we can discuss it. Thank you!
I have several preprocessing ops executed on the GPU, after which I want to run inference directly, so passing a GPU address is preferred.
More specifically, passing the GPU address of a Tensor is preferred. Regardless of whether the input resides on CPU or GPU, the input type should not be limited to np.array.
I hope this will be supported in the future.
Thank you for your reply; we will discuss it seriously.
Since you haven't replied for more than a year, we have closed this issue/PR. If the problem is not solved or a follow-up question comes up, please reopen it at any time and we will continue to follow up.