Torch-Pruning
Is it necessary to transfer the model to CPU?
Hello. In torch_pruning/dependency.py there is a line `model.eval().cpu()`. With it, I can't use the RAFT model (an optical flow model) that I'm currently researching; it fails at

```python
raise RuntimeError("module must have its parameters and buffers "
                   "on device {} (device_ids[0]) but found one of "
                   "them on device: {}".format(self.src_device_obj, t.device))
```

even if I transfer the model to CPU myself. But if I comment out that `model.eval().cpu()` line, then the program passes through
```python
DG.build_dependency(model, example_inputs=[torch.randn(1, 3, 440, 1024), torch.randn(1, 3, 440, 1024)])
```

just fine. So, is the line `model.eval().cpu()` actually necessary in torch_pruning? Does torch_pruning work on CPU only?
Thanks in advance.
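In case it helps pinpoint the issue: that RuntimeError text is the device check raised inside `nn.DataParallel.forward`, so my guess is that `model.eval().cpu()` moves the wrapped parameters off `device_ids[0]` while the `DataParallel` wrapper still expects them there. Here is a minimal sketch of the unwrap-before-CPU workaround I have in mind (the stand-in `nn.Linear` model and the `.module` unwrapping are my own assumptions, not torch_pruning's API):

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for RAFT: any small module, possibly wrapped
# in nn.DataParallel when CUDA is available.
net = nn.Linear(4, 2)
if torch.cuda.is_available():
    model = nn.DataParallel(net.cuda())  # forward() asserts params live on device_ids[0]
else:
    model = net

# Unwrap DataParallel before moving to CPU, so that the device check
# in DataParallel.forward is never triggered during dependency tracing.
core = model.module if isinstance(model, nn.DataParallel) else model
core = core.eval().cpu()

# A tracing-style forward pass on CPU now succeeds.
out = core(torch.randn(1, 4))
print(tuple(out.shape))  # (1, 2)
```

If unwrapping like this lets `DG.build_dependency` run with `model.eval().cpu()` left in place, the problem is the `DataParallel` wrapper rather than torch_pruning itself.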