Maxim Zemlyanikin

Results: 2 issues by Maxim Zemlyanikin

Hi, thanks for your great work! Which backend did you use to run inference on the device? Did you use the PyTorch model as is, or did you convert it to TensorRT?

If we create a model on the CPU and run it, copies of the convolutions' weights (`weight.original`) are saved on the CPU. If we then call `model.cuda()`, those copies stay on the CPU.
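A minimal sketch of the suspected mechanism, assuming the `weight.original` copies are kept as plain Python attributes rather than registered buffers (the class and attribute names below are hypothetical, not from the library): `nn.Module._apply`, which backs `.cuda()`, `.to()`, and `.double()`, only visits registered parameters and buffers, so a plain-attribute tensor is left behind. The demo uses `.double()` so it also runs without a GPU, but `.cuda()` goes through the same machinery.

```python
import torch
import torch.nn as nn

class ConvWithBackup(nn.Module):
    """Hypothetical module that keeps a copy of the conv weight
    as a plain attribute, standing in for `weight.original`."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, 3)
        # Plain attribute: invisible to .cuda()/.to()/.double().
        self.weight_original = self.conv.weight.detach().clone()

model = ConvWithBackup()
model.double()  # same _apply path as model.cuda()
print(model.conv.weight.dtype)      # converted (float64)
print(model.weight_original.dtype)  # left behind (float32)

# Registering the copy as a buffer makes _apply move it too:
fixed = ConvWithBackup()
del fixed.weight_original
fixed.register_buffer("weight_original", fixed.conv.weight.detach().clone())
fixed.double()
print(fixed.weight_original.dtype)  # converted (float64)
```

If this is the cause, registering the saved copies with `register_buffer` (or moving them explicitly inside an overridden `_apply`) would make `model.cuda()` carry them along.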

bug