BackgroundMattingV2

High CPU usage in Webcam inference demo

leavittx opened this issue 4 years ago · 2 comments

I've tried the webcam inference demo, and it runs at ~30 fps at 640x360 resolution on my laptop's Nvidia GTX 1050, which is really neat! However, the CPU usage is 60-80%, while GPU utilization according to Task Manager is only 6-10%. Is that something specific to how the Python demo works, i.e. would it not be CPU-intensive at all if used properly from C++ (TorchScript)? I really wonder why the GPU usage is that low. Thank you

leavittx · Jan 21 '21 20:01
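
One way to check where the time actually goes is to time the preprocessing and the forward pass separately with CUDA synchronization. This is a minimal sketch, not part of the original post or the demo; preprocess_frame, model, frame, and bgr are placeholder names standing in for the demo's own objects.

import time
import torch

def time_call(fn, *args, iters=100):
    # Synchronize so previously queued GPU work isn't billed to this call
    torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(iters):
        out = fn(*args)
    # Wait for any GPU kernels launched inside fn before stopping the clock
    torch.cuda.synchronize()
    return (time.perf_counter() - start) / iters, out

# Hypothetical usage: compare CPU-side preprocessing against the GPU forward pass
# pre_s, src = time_call(preprocess_frame, frame)
# fwd_s, _   = time_call(model, src, bgr)

If the preprocessing time dominates, the high CPU load comes from building the input tensor rather than from the model itself, which matches the suggestion in the next comment.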

Could you share the steps you followed to run the code?

taaha827 · Mar 17 '21 12:03

You can replace return ToTensor()(Image.fromarray(frame)).unsqueeze_(0).cuda() with

datatype = torch.float32
pic = Image.fromarray(frame)
# Build the tensor from the raw bytes instead of ToTensor(), which is CPU-heavy
img = torch.ByteTensor(torch.ByteStorage.from_buffer(pic.tobytes()))
# Upload the uint8 data first so the float conversion and normalization run on the GPU
img = img.cuda()
# Reshape to H x W x C, then reorder to C x H x W
img = img.view(pic.size[1], pic.size[0], len(pic.getbands()))
img = img.permute((2, 0, 1)).contiguous()
# Convert to float32, scale to [0, 1], and add a batch dimension
tmp = img.to(dtype=datatype).div(255)
tmp.unsqueeze_(0)
return tmp

to reduce CPU usage during image preprocessing. ToTensor seems to eat CPU regardless of the frame rate or image size.
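
A related option, not from the original comment, is to skip PIL entirely and build the tensor straight from the OpenCV frame. This is a minimal sketch, assuming frame is the uint8 H x W x C RGB numpy array the demo already produces; frame_to_cuda is a hypothetical helper name.

import torch

def frame_to_cuda(frame):
    # frame: uint8 HxWxC RGB numpy array (assumption; matches the demo's converted webcam frame)
    img = torch.from_numpy(frame)             # zero-copy wrap of the numpy buffer
    img = img.cuda()                          # upload the raw uint8 data
    img = img.permute(2, 0, 1).contiguous()   # HWC -> CHW, done on the GPU
    return img.float().div(255).unsqueeze(0)  # normalize to [0, 1], add batch dim

As with the snippet above, uploading uint8 keeps the per-frame host-to-device transfer small and moves the float conversion and normalization onto the GPU, which should lower the CPU load per frame.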