PU-Net
Inference in C++
Hi, I want to load the trained model and run inference in C++, and I am trying to figure out how to do this. What I have figured out so far is that I need to build TensorFlow with GPU support and generate a C++ library to link against. What I don't know is: how do I make sure the loaded graph will actually run on the GPU? (It does when run in Python.) I am also unsure about the custom ops and whether I need to do anything with them when running inference in C++. This might sound a little irrelevant, but if you have any pointers, please guide me.
thanks
Hi, TF has a C++ interface, and the TF operations are written in CUDA, so one way may be to write the testing code in C++ with the TF C++ API and call the CUDA code. I have no experience with this, so it is only my guess.
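To make the suggestion above more concrete, here is a minimal sketch of C++ inference against a frozen graph with the TF 1.x C++ API. It is an assumption-heavy illustration, not a tested PU-Net setup: the file names (`pu_net_frozen.pb`, `tf_ops/sampling/tf_sampling_so.so`), the tensor names (`input_points:0`, `output_points:0`), and the input shape are all hypothetical; check your exported graph for the real ones. Two points it demonstrates: loading the compiled custom-op `.so` files before creating the graph (otherwise you get "Op type not registered" errors), and setting `log_device_placement` so you can verify the ops actually land on the GPU.

```cpp
#include <memory>
#include <vector>

#include "tensorflow/core/framework/tensor.h"
#include "tensorflow/core/platform/env.h"
#include "tensorflow/core/public/session.h"

int main() {
  using namespace tensorflow;
  Env* env = Env::Default();

  // Load the compiled custom-op libraries first, so their kernels are
  // registered before the graph is parsed. The path is an assumption;
  // PU-Net builds several such .so files (sampling, grouping, interpolation).
  void* handle = nullptr;
  TF_CHECK_OK(env->LoadLibrary("tf_ops/sampling/tf_sampling_so.so", &handle));

  // Read the frozen GraphDef (hypothetical file name).
  GraphDef graph_def;
  TF_CHECK_OK(ReadBinaryProto(env, "pu_net_frozen.pb", &graph_def));

  // log_device_placement prints where each op runs, so you can confirm
  // the graph is placed on the GPU just as it is in Python.
  SessionOptions options;
  options.config.set_log_device_placement(true);
  options.config.mutable_gpu_options()->set_allow_growth(true);

  std::unique_ptr<Session> session(NewSession(options));
  TF_CHECK_OK(session->Create(graph_def));

  // Dummy input: one point cloud of 1024 xyz points (shape is an assumption).
  Tensor input(DT_FLOAT, TensorShape({1, 1024, 3}));
  input.flat<float>().setZero();

  std::vector<Tensor> outputs;
  TF_CHECK_OK(session->Run({{"input_points:0", input}},
                           {"output_points:0"}, {}, &outputs));
  return 0;
}
```

If the graph was trained on GPU and TensorFlow was built with CUDA, the placer will put the GPU kernels on the GPU automatically; the placement log is just a way to verify it. Note that in newer TensorFlow versions `Env::LoadLibrary` was renamed to `Env::LoadDynamicLibrary`.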