GPU Utilization?
I've been running inference with the provided pre-trained model, but I've noticed that it only runs on the CPU. I attempted to convert the code to run on a GPU; however, I get numerous runtime errors about CPU vs. GPU tensor mismatches. I see that there are several C++ source files included. Does this mean that this implementation of CRF as RNN is not able to run on a GPU because the C++ code is compiled for the CPU? Or am I missing something in my conversion of your code?
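For reference, the device-mismatch errors usually mean the model's parameters and the input tensors are not on the same device. Below is a minimal sketch of the standard PyTorch pattern for moving both onto the GPU; the placeholder Conv2d model and tensor shape are mine, not this repo's, and even with this in place the CRF layer can still fail if its filtering op is a CPU-only C++ extension.

```python
# Minimal sketch of the usual PyTorch GPU-conversion pattern.
# The model here is a placeholder, not the CRF-as-RNN model from this repo.
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Conv2d(3, 21, kernel_size=1)          # stand-in for the real network
model = model.to(device)                         # moves all parameters/buffers

image = torch.randn(1, 3, 256, 256).to(device)   # inputs must be on the same device

with torch.no_grad():
    output = model(image)
print(output.device)
```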
Thanks!
Hi, I have the same question: how can I change it to train on a GPU?
Bump!
Any updates here? Did anyone figure out how to run this on a GPU?
I tried changing the _CPU parameter in the filter.py file, but it gives a segmentation fault.
I changed both _CPU and the device in the abstract filter class (hard-coded as cpu), but this crashes my kernel.
I had the same problem... I can't change filters.py to use CUDA tensors.
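A likely explanation for the segfaults: if the permutohedral filtering is a CPU-only C++ extension, handing it CUDA tensors makes it dereference device memory it cannot access. One hedged workaround sketch (the function name and call pattern below are hypothetical, not taken from filters.py) is to round-trip through CPU memory at that boundary and keep the rest of the network on the GPU:

```python
# Hypothetical workaround sketch, not the repo's actual code: keep the
# CPU-only filtering op on CPU tensors and copy data across explicitly.
def filter_on_cpu(cpu_filter_op, x_gpu):
    """Run a CPU-only filtering op on a CUDA tensor by round-tripping."""
    x_cpu = x_gpu.detach().cpu()        # the C++ extension only sees CPU memory
    out_cpu = cpu_filter_op(x_cpu)
    return out_cpu.to(x_gpu.device)     # move the result back to the GPU
```

This avoids the crash but pays a host-device copy on every CRF iteration, so it is only a stopgap compared to a proper CUDA implementation.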
I don't think the GPU is supported in the PyTorch version.
Has anyone tried to merge this implementation: https://github.com/HapeMask/crfrnn_layer? The author implemented a GPU version, but I don't have a GPU to debug with.
Any updates here?
As an alternative, this repo provides an implementation that runs on a GPU and supports batch sizes > 1:
https://github.com/HapeMask/crfrnn_layer