Running Inference on CPU
Hello,
I am trying to use the run_infer.sh command on my own tiled images. However, my MacBook (macOS) doesn't have a supported GPU. Is there a way to run inference on the CPU only?
Best, KS
Hi, we have not tested this, but it is currently hard-coded to run on the GPU. Please try changing the line here to:
net = net.to("cpu")
and let us know how you get on.
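If you prefer, the same change can be made device-agnostic instead of hard-coding "cpu". This is an untested sketch (not the repo's exact code; `net` refers to the same model object as in the line above):

```python
import torch

# Untested sketch: use the GPU when one is available, otherwise fall back to the CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
net = net.to(device)
```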
You also need to change this: https://github.com/vqdang/hover_net/blob/a0f80c7acb9a14964d597d15d7ebd9688a0230cb/models/hovernet/run_desc.py#L176 Also remove this, because DataParallel is not applicable on CPU: https://github.com/vqdang/hover_net/blob/a0f80c7acb9a14964d597d15d7ebd9688a0230cb/infer/base.py#L69
There will be other places too; they will crop up once you try.
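As a rough illustration of the pattern to look for (this is not the exact repo code, just a sketch of guarding the `DataParallel` wrapping and the later `.module` access; the model here is a placeholder):

```python
import torch
import torch.nn as nn

# Placeholder standing in for the HoVer-Net model object (illustrative only)
net = nn.Linear(4, 2)

use_gpu = torch.cuda.is_available()

# Only wrap in DataParallel when a GPU is present; on CPU keep the bare module
if use_gpu:
    net = nn.DataParallel(net).to("cuda")
else:
    net = net.to("cpu")

# Any later code that assumed DataParallel must unwrap conditionally
bare_net = net.module if isinstance(net, nn.DataParallel) else net
print(type(bare_net).__name__)
```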
@vqdang @simongraham, thanks for your reply. It works. It is slow, though, so relying on a GPU is the better choice.
Best, Kenong
Keeping this issue open for easy tracking
I would like to know whether the modified model can be trained on the CPU, because I have encountered some problems and need to debug in CPU mode. I commented out some statements as you said above, but I still run into errors such as 'Hovernet object has no attribute module'.
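A likely cause of that error (an assumption, not verified against the repo): once the `DataParallel` wrapper is removed, any code that still accesses `net.module` fails, since `.module` only exists on the wrapped model. A guarded access, as in this hypothetical helper, works on both CPU and GPU:

```python
import torch.nn as nn

# Hypothetical helper: return the underlying model whether or not it is wrapped in DataParallel
def unwrap(model: nn.Module) -> nn.Module:
    return model.module if isinstance(model, nn.DataParallel) else model
```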
Hello team, thanks for this repo! Is there an inference notebook/script available to use after I have trained the model with run.py?