
Loading a saved TorchScript model with PyTorchModelLoader

Open debayan-gh opened this issue 5 years ago • 4 comments

@jackm321

Is there a way to load TorchScript traced files from disk and compile them using the PyTorchModelLoader, without using the Python torch_glow module? A PyTorchFileLoader used to exist as part of torch_glow, but it was removed in #4866.

Can we reintroduce this loadJITGraphForOnnxTraining() functionality, preferably under a different name? I can raise a PR for this.

Example invocation (snippet based on the PyTorchFileLoader code):

  // Lower the module's "forward" method to a flat graph plus the
  // parameter tensors it captures.
  auto method = module->get_method("forward");
  auto graphAndTensors =
      torch::jit::LowerGraph(*method.graph(), module->_ivalue());

  auto graph = graphAndTensors.first;
  // Run graph optimizations here...

  // Load the lowered graph and its parameters directly into the Glow
  // Function F.
  PyTorchModelLoader::loadJITGraphWithParameters(
      F, *graph, inputs, graphAndTensors.second,
      inputPlaceholders, outputPlaceholders);
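
For context, the snippet assumes `module`, `F`, `inputs`, and the placeholder vectors already exist. A minimal sketch of that setup might look like the following; the model path, Function name, input shape, and the exact container type of `inputs` are illustrative assumptions on my part, not part of the proposed API:

  #include <torch/script.h>
  #include "glow/ExecutionEngine/ExecutionEngine.h"

  // Hypothetical setup: path, Function name, and input shape are
  // placeholders chosen for illustration.
  auto module = std::make_shared<torch::jit::Module>(
      torch::jit::load("model.pt"));

  glow::ExecutionEngine EE;
  glow::Function &F = *EE.getModule().createFunction("forward");

  // Example inputs used to infer shapes; the exact container type
  // depends on the final loader signature.
  std::vector<torch::jit::IValue> inputs = {torch::randn({1, 3, 224, 224})};
  std::vector<glow::Placeholder *> inputPlaceholders, outputPlaceholders;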

This loader will attempt to load a fully supported graph and will bail out if any op is unsupported. It would let standalone C++ applications compile and run fully supported TorchScript models on a specific Glow backend, without the complexity of creating Glow fusion node(s) and while avoiding much of the torch_glow JIT execution path.
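
Assuming the new entry point reports failures through glow::Error like the existing torch_glow loaders do (an assumption on my part), a standalone application could fail fast with Glow's EXIT_ON_ERR macro:

  // Hypothetical error handling: assumes the loader returns
  // glow::Error when it encounters an unsupported op. EXIT_ON_ERR
  // aborts the application instead of falling back to the JIT
  // interpreter.
  EXIT_ON_ERR(PyTorchModelLoader::loadJITGraphWithParameters(
      F, *graph, inputs, graphAndTensors.second,
      inputPlaceholders, outputPlaceholders));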

Thanks

debayan-gh avatar Oct 27 '20 17:10 debayan-gh

Hi @jackm321 , @jfix71 ,

I've raised a PR for this. Please advise whether we can add this constructor, or whether there is a better way to handle the above use case.

debayan-gh avatar Oct 31 '20 10:10 debayan-gh

If the whole graph is lowerable, what's the difference between this and using fusion?

yinghai avatar Nov 03 '20 20:11 yinghai

We're interested in a solution that uses PyTorchModelLoader directly and doesn't require going through torch_glow or involving the JIT interpreter. This would allow a standalone application (for example, an image classifier) to load TorchScript files and compile them, assuming of course that all operations in the model are supported (and to fail if not).

Such support already exists for the other model loaders, ONNXModelLoader and Caffe2ModelLoader. Making it possible for PyTorchModelLoader would put it on par with the existing ones.
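
For comparison, this is roughly what the standalone flow looks like today with ONNXModelLoader; the file name, input name, and shape below are made up for illustration:

  #include "glow/ExecutionEngine/ExecutionEngine.h"
  #include "glow/Importer/ONNXModelLoader.h"

  glow::ExecutionEngine EE;
  glow::Function *F = EE.getModule().createFunction("main");

  // Describe the model input, then load the ONNX file directly into F.
  glow::Type inputType(glow::ElemKind::FloatTy, {1, 3, 224, 224});
  glow::ONNXModelLoader onnxLoader("model.onnx", {"input"}, {&inputType}, *F);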

The proposed changes could be a solution for the above: we deal only with parameters and inputs and never involve the JIT interpreter. This does not affect the current PyTorchModelLoader path used by torch_glow.
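
Once the graph loads, the application could then compile and run it through the usual Glow APIs. A rough sketch; `inputTensor` is a hypothetical glow::Tensor holding the real input data:

  // Compile for the configured backend and run inference.
  EE.compile(glow::CompilationMode::Infer);

  glow::PlaceholderBindings bindings;
  bindings.allocate(EE.getModule().getPlaceholders());

  // inputTensor is a hypothetical glow::Tensor filled by the caller.
  glow::updateInputPlaceholders(bindings, {inputPlaceholders[0]},
                                {&inputTensor});
  EE.run(bindings);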

debayan-gh avatar Nov 04 '20 10:11 debayan-gh

@jackm321 can you take a look?

yinghai avatar Nov 04 '20 21:11 yinghai