
paddle inference: can the input already be on GPU, is there any interface provided

Open MuYu-zhi opened this issue 3 years ago • 5 comments

Please ask your question

I have put my input on the GPU and want to run inference, but I only found input_tensor.copy_from_cpu(). Is there any interface to read input data directly from the GPU?

Greatly appreciate!

MuYu-zhi avatar Aug 02 '22 10:08 MuYu-zhi
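For context, the CPU-copy path the questioner refers to looks roughly like the sketch below. It assumes the standard paddle.inference Predictor API (get_input_names, get_input_handle, copy_from_cpu, run, get_output_names, get_output_handle, copy_to_cpu); the function name and single-input shape are illustrative, not from the thread.

```python
def infer_via_cpu_copy(predictor, x):
    """Feed a NumPy array to a paddle.inference Predictor.

    copy_from_cpu() is the only documented input path here: it takes a
    host-side np.ndarray, so data already on the GPU must first be
    brought back to the CPU before inference.
    """
    name = predictor.get_input_names()[0]      # assumes a single input
    t = predictor.get_input_handle(name)
    t.reshape(x.shape)
    t.copy_from_cpu(x)                         # host -> device copy happens here
    predictor.run()
    out_name = predictor.get_output_names()[0]
    return predictor.get_output_handle(out_name).copy_to_cpu()
```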

Hi! We've received your issue; please be patient while we respond. We will arrange for technicians to answer your questions as soon as possible. Please double-check that you have provided a clear problem description, reproduction code, environment & version info, and error messages. You may also look for an answer in the official API docs, the FAQ, historical Github Issues, and the AI community. Have a nice day!

paddle-bot[bot] avatar Aug 02 '22 10:08 paddle-bot[bot]

Sorry, due to many considerations, we don't have such an interface for the time being. Please describe the usage scenario in detail for us to discuss. Thank you!

RichardWooSJTU avatar Aug 02 '22 13:08 RichardWooSJTU

I have several preprocessing ops executed on the GPU. After that, I want to run inference directly, so passing the GPU address would be preferred.

MuYu-zhi avatar Aug 03 '22 02:08 MuYu-zhi

More specifically, passing the GPU address of a Tensor would be preferred. Regardless of whether the input is on CPU or GPU, the input type should not be limited to np.array.

Hope to have support in the future.

MuYu-zhi avatar Aug 03 '22 03:08 MuYu-zhi
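For reference, later Paddle releases added a zero-copy input path, Tensor.share_external_data(), which accepts a paddle.Tensor that may already reside on the GPU and avoids the host round-trip requested here. The sketch below assumes that API is available; the function name and single-input assumption are illustrative.

```python
def infer_via_gpu_share(predictor, gpu_tensor):
    """Zero-copy path: bind a tensor already on the GPU as predictor input.

    Assumes a Paddle release whose inference Tensor provides
    share_external_data(); no host <-> device copy is made for the input.
    """
    name = predictor.get_input_names()[0]          # assumes a single input
    t = predictor.get_input_handle(name)
    t.share_external_data(gpu_tensor)              # reuse the existing GPU buffer
    predictor.run()
    out_name = predictor.get_output_names()[0]
    return predictor.get_output_handle(out_name).copy_to_cpu()
```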

Thank you for your reply; we will discuss it seriously.

RichardWooSJTU avatar Aug 03 '22 04:08 RichardWooSJTU

Since you haven't replied for more than a year, we have closed this issue/PR. If the problem is not solved or there is a follow-up question, please reopen it at any time and we will continue to follow up.

paddle-bot[bot] avatar Aug 15 '23 06:08 paddle-bot[bot]