
Support inference on a CPU

tommy19970714 opened this issue 3 years ago · 2 comments

Thanks for your great work!

I ran your code; the results were very good and the execution speed was quite fast. I compared the speeds, shown below:

warp_model inference time: 0.016576 [sec]
gen_model inference time: 0.007718 [sec]
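(For reference, here is a minimal sketch of how such per-model timings can be measured. `warp_model` and the input shape below are hypothetical stand-ins for illustration, not the actual PF-AFN module or its real input.)

```python
import time
import torch

# Hypothetical stand-ins for the real PF-AFN warp_model and its input;
# the actual modules and shapes come from the repository.
warp_model = torch.nn.Conv2d(3, 3, kernel_size=3, padding=1)
dummy_input = torch.randn(1, 3, 256, 192)

with torch.no_grad():
    start = time.time()
    _ = warp_model(dummy_input)
    # On GPU, call torch.cuda.synchronize() before reading the clock,
    # because CUDA kernels execute asynchronously.
    print(f"warp_model inference time: {time.time() - start:.6f}[sec]")
```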

Therefore, it has the potential to run in real time on mobile and other devices. I have successfully run gen_model on the CPU, but warp_model could not run because parts of it are unimplemented for CPU-only inference.

These are the two parts unimplemented for CPU inference: https://github.com/geyuying/PF-AFN/blob/50f440b2c103b287194cfb67d4d42396cf3905c0/PF-AFN_test/models/correlation/correlation.py#L331

https://github.com/geyuying/PF-AFN/blob/50f440b2c103b287194cfb67d4d42396cf3905c0/PF-AFN_test/models/correlation/correlation.py#L385

Is it possible to support CPU inference for warp_model?

You are using cupy_launch; perhaps the following issue might be helpful: https://github.com/sniklaus/pytorch-pwc/issues/39
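In the spirit of that discussion, below is a minimal sketch of a pure-PyTorch correlation (cost volume) that runs on the CPU, as one possible replacement for the CuPy kernel. The name `correlation_cpu` and the `max_displacement=4` default are assumptions for illustration, not PF-AFN's actual interface.

```python
import torch
import torch.nn.functional as F

def correlation_cpu(feat1, feat2, max_displacement=4):
    """Naive pure-PyTorch cost volume that also runs on CPU.

    For each integer displacement (dy, dx) within max_displacement,
    multiplies feat1 with the shifted feat2 and averages over channels,
    producing (2 * max_displacement + 1) ** 2 output channels.
    """
    b, c, h, w = feat1.shape
    pad = max_displacement
    feat2_padded = F.pad(feat2, (pad, pad, pad, pad))  # zero-pad H and W
    out = []
    for dy in range(2 * pad + 1):
        for dx in range(2 * pad + 1):
            shifted = feat2_padded[:, :, dy:dy + h, dx:dx + w]
            out.append((feat1 * shifted).mean(dim=1, keepdim=True))
    return torch.cat(out, dim=1)

# Example: two 16-channel feature maps yield an 81-channel cost volume.
a = torch.randn(1, 16, 32, 24)
b = torch.randn(1, 16, 32, 24)
print(correlation_cpu(a, b).shape)  # torch.Size([1, 81, 32, 24])
```

This loop-based version is simple but slow; it trades the compiled CUDA kernel's speed for portability.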

tommy19970714 · May 05 '21 07:05

I am also interested in running it on CPU only.

Adeel-Intizar · Jun 07 '21 09:06

I found that those two lines actually don't affect inference on the CPU. I run inference on the CPU with the following steps:

You can directly make the following changes:
1. In correlation.py, remove `import cupy`.
2. In correlation.py, remove the `cupy_launch` helper:
```python
@cupy.util.memoize(for_each_device=True)
def cupy_launch(strFunction, strKernel):
    return cupy.cuda.compile_with_cache(strKernel).get_function(strFunction)
```
3. In correlation.py, remove the `raise NotImplementedError()` statements.
4. In all files, change every `.cuda()` to `.to(device)`, with `device = torch.device("cpu")` (see the sketch after this list).
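A minimal illustration of the last step, using a stand-in module rather than PF-AFN's actual models:

```python
import torch
import torch.nn as nn

# Choose the device once; on a CPU-only machine this resolves to "cpu".
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Stand-in for warp_model / gen_model; the real modules come from the repo.
model = nn.Linear(4, 2).to(device)      # was: model.cuda()
inputs = torch.randn(1, 4).to(device)   # was: inputs.cuda()
outputs = model(inputs)
```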

This way, I can run inference on my CPU-only laptop.

Hope this helps.

Charlie839242 · Dec 14 '21 15:12