
GPU Utilization?

Open bblakeslee-maker opened this issue 6 years ago • 9 comments

I've been running inference with the provided pre-trained model, but I've noticed that it only runs on the CPU. I attempted to convert the code to run on a GPU; however, I get numerous runtime errors regarding CPU tensors vs GPU tensors. I see that there are several C++ source files included. Does this mean that this implementation of CRF as RNN is not able to run on a GPU, due to the code compiling for the CPU? Or am I missing something in my conversion of your code?
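For context, the generic fix for "CPU tensor vs GPU tensor" runtime errors in PyTorch is to move the model and all of its inputs to the same device. A minimal sketch (using a stand-in `Conv2d` module, not this repo's actual `CrfRnnNet`):

```python
import torch

# Pick the GPU when available, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Stand-in module; in this repo it would be the CRF-as-RNN model.
model = torch.nn.Conv2d(3, 8, kernel_size=3, padding=1).to(device)

# Inputs must live on the same device as the model's parameters.
image = torch.rand(1, 3, 32, 32, device=device)
output = model(image)
```

Note this alone cannot help if the repo's custom C++ extension is compiled with CPU-only kernels: those ops will still reject CUDA tensors no matter where the Python-side tensors live, which would explain the errors described above.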

Thanks!

bblakeslee-maker avatar Dec 13 '19 01:12 bblakeslee-maker

Hi, I have the same question. How can I change it to train on a GPU?

Robertwyq avatar Apr 08 '20 15:04 Robertwyq

Bump!

aatifjiwani avatar Sep 23 '20 23:09 aatifjiwani

Any updates here? Did anyone find out how to run this on a GPU? I tried changing the `_CPU` parameter in the filter.py file, but it gives a segmentation fault.

bicycleman15 avatar Nov 13 '20 21:11 bicycleman15

I changed both `_CPU` and the device in the abstract filter class (hard-coded as CPU), but this crashes my kernel.
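The hard-coding being described usually looks like the sketch below (class and method names here are hypothetical, not the repo's actual code): a filter class pins `torch.device("cpu")` at construction time, and one common cleanup is to infer the device from the input tensor instead.

```python
import torch

class SpatialFilter:
    """Hypothetical sketch of a filter with a configurable device."""

    def __init__(self, device=None):
        # Instead of hard-coding torch.device("cpu"), accept a device
        # and default to CPU only when none is given.
        self.device = device if device is not None else torch.device("cpu")

    def apply(self, x):
        # Allocate internal tensors on the input's device so CPU and
        # CUDA inputs both work -- provided the underlying kernel
        # actually has an implementation for that device.
        weights = torch.ones(3, device=x.device) / 3.0
        return x * weights.sum()

f = SpatialFilter()
x = torch.rand(4, 3)
y = f.apply(x)
```

The crash reports above are consistent with the deeper problem, though: if the compiled permutohedral-lattice kernels only implement a CPU path, flipping a device flag on the Python side hands them CUDA pointers they were never built to handle, which is exactly the kind of thing that segfaults or kills a kernel.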

dragonsan17 avatar Dec 30 '20 21:12 dragonsan17

I had the same problem... I can't change filters.py to use the CUDA type.

bitterhoneyy avatar Jan 04 '21 11:01 bitterhoneyy

I don't think the GPU is supported in the PyTorch version.

dragonsan17 avatar Jan 04 '21 11:01 dragonsan17

Has anyone tried to merge this implementation: https://github.com/HapeMask/crfrnn_layer? The author implemented a GPU version, but I don't have a GPU to debug with.

heng-yuwen avatar Mar 13 '21 23:03 heng-yuwen

Any updates here?

chciw avatar Oct 28 '21 14:10 chciw

As an alternative, this repo provides an implementation that runs on the GPU and supports batch sizes > 1:

https://github.com/HapeMask/crfrnn_layer

lukaszbinden avatar Sep 01 '22 09:09 lukaszbinden