About CPU inference

Open pramishp opened this issue 3 years ago • 2 comments

Thank you for open sourcing this amazing piece of work. I tried running the code on Windows 10 without a GPU, with use_cuda: false and some CUDA checks removed. But it threw the following error:

    return func(*args, **kwargs)
  File "demo.py", line 281, in main
    full_imgs_list, body_imgs, body_targets = batch
TypeError: cannot unpack non-iterable MemoryPinning object

I also tried running it on my Linux machine with a GPU, once with use_cuda: true and once with use_cuda: false. The inference time was almost the same, so I guess the use_cuda flag is not working.

Can we do the inference on the CPU?

pramishp avatar Aug 05 '21 07:08 pramishp

mark

PandaPandaChen avatar Sep 08 '21 06:09 PandaPandaChen

Hi, maybe this is late, but I ran into this issue today. You should set all "pin_memory" parameters to False in build.py and demo.py. When the DataLoader instance is created, this parameter is set to True; it improves training and inference speed when CUDA is available. When running on CPU only, it should be False.
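A minimal sketch of the idea, not the project's actual code: the dataset and batch size below are placeholders, and tying pin_memory to CUDA availability is one simple way to make the same script work on both CPU and GPU machines.

```python
# Sketch: construct a DataLoader whose pin_memory setting matches the
# hardware, instead of hard-coding pin_memory=True (which only helps
# when tensors are later copied to a CUDA device).
import torch
from torch.utils.data import DataLoader, TensorDataset

use_cuda = torch.cuda.is_available()  # False on a CPU-only machine

# Placeholder dataset standing in for the project's real one.
dataset = TensorDataset(torch.randn(8, 3), torch.randint(0, 2, (8,)))

loader = DataLoader(dataset, batch_size=4, pin_memory=use_cuda)

for inputs, labels in loader:
    print(inputs.shape, labels.shape)
```

On a CPU-only machine this yields pin_memory=False and plain, unpinnable batches, which avoids the pinning path that failed in the traceback above.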

A-Abedi avatar Apr 20 '22 10:04 A-Abedi