Doesn't use GPU
I'm running the package on Google Colab and the GPU is detected and available, however the FER package doesn't seem to use it. I've run it before on the same setup and it worked using the GPU, but now it doesn't. Has anyone else had the same problem?
Can you please try enclosing the inference in `with tf.device('/device:GPU:0')` and see if it helps?
```python
with tf.device('/device:GPU:0'):
    detector = FER(mtcnn=True)

    # Video predictions
    video = Video(videofile)

    # Output list of dictionaries
    raw_data = video.analyze(detector, display=False)
```
@justinshenk Thanks for the reply. I have tried that, but it still doesn't work. I've scoured the forums, and it seems there has been a Colab update and the CUDA and TensorFlow versions are now different. I think that might be the reason, but I don't know how to fix it.
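For anyone debugging the same mismatch, a quick sanity check of what the Colab runtime actually provides (plain TensorFlow calls, nothing FER-specific):

```python
import tensorflow as tf

# Report the installed TensorFlow build and whether it can see the GPU.
print("TensorFlow version:", tf.__version__)
print("Built with CUDA:", tf.test.is_built_with_cuda())
print("Visible GPUs:", tf.config.list_physical_devices('GPU'))
```

If the GPU list comes back empty, the TensorFlow build and the runtime's CUDA install don't match, and no amount of `tf.device` will help.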
@justinshenk I'm continuing my comments from that issue here since I didn't want to hijack his question. I've got the GPU working, but it seems it was working the whole time; it only uses about 5% of the GPU, so it's only marginally faster than the CPU. I found a little bash script that plots the GPU usage (something like the sketch below).
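The original script wasn't attached; a minimal Python equivalent that polls `nvidia-smi` once a second (it's on the PATH in Colab GPU runtimes) looks roughly like this:

```python
import subprocess
import time

# Poll nvidia-smi every second and print GPU utilization and memory use.
while True:
    out = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=utilization.gpu,memory.used",
         "--format=csv,noheader"],
        capture_output=True, text=True,
    )
    print(out.stdout.strip())
    time.sleep(1)
```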
Is there a way to force the code to use more of the GPU?
Batching the frames and adding a queue would speed it up and increase GPU usage. It would be great if someone would make a PR for this; the sketch below shows the rough idea.
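A rough sketch of that producer/consumer shape, not FER's actual API: `detect_emotions_batch` is hypothetical (FER currently processes one frame per call, so a batched method is what the PR would add), and `"video.mp4"` is a placeholder path.

```python
import queue
import threading

import cv2
from fer import FER

BATCH_SIZE = 32

def frame_producer(videofile, frame_queue):
    """Decode frames from disk and enqueue them in fixed-size batches."""
    cap = cv2.VideoCapture(videofile)
    batch = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        batch.append(frame)
        if len(batch) == BATCH_SIZE:
            frame_queue.put(batch)
            batch = []
    if batch:
        frame_queue.put(batch)  # flush the last partial batch
    frame_queue.put(None)       # sentinel: no more frames
    cap.release()

detector = FER(mtcnn=True)
frame_queue = queue.Queue(maxsize=4)  # bounded so decoding can't outrun inference
results = []

producer = threading.Thread(target=frame_producer, args=("video.mp4", frame_queue))
producer.start()

while True:
    batch = frame_queue.get()
    if batch is None:
        break
    # Hypothetical batched call: MTCNN and the emotion classifier would
    # receive the whole batch at once instead of one frame at a time.
    results.extend(detector.detect_emotions_batch(batch))

producer.join()
```

The bounded queue keeps frame decoding (CPU) running ahead of inference without exhausting memory, and the larger per-call batch is what actually raises GPU utilization.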