
Export to ONNX

Open ml5ah opened this issue 2 years ago • 15 comments

Is your feature request related to a problem? Please describe.
A script to convert the Address Parser (.ckpt) model to ONNX (.onnx)?

Describe the solution you'd like
Has someone successfully converted the address parser model to ONNX format?

ml5ah avatar Aug 04 '22 17:08 ml5ah

Thank you for your interest in improving Deepparse.

github-actions[bot] avatar Aug 04 '22 17:08 github-actions[bot]

Hi @ml5ah,

I only used ONNX once, and it was not a successful experience (it was my initial idea to handle Deepparse weights).

If I recall correctly, the bottleneck was that you had to fix the batch size to a specific value, which made it cumbersome to find an appropriate one. However, that was back in 2019 or so, and things might have evolved since. I will take a look at it.

So the ideal solution would be an export method for an AddressParser to export itself into ONNX format, right?

davebulaval avatar Aug 04 '22 18:08 davebulaval

I've looked at the PyTorch documentation, and it still seems like you need to provide a batch-like example input for the export.

If you come up with a method that can export the AddressParser.model attribute into ONNX format, I will be more than happy to merge it as a PR. Otherwise, I don't find ONNX helpful and will only provide a new save_address_parser_weights method to save the model's weights in the next release.

davebulaval avatar Aug 04 '22 18:08 davebulaval

Thanks for the reply, @davebulaval!

Yes, that's correct; that would be the best way. I have been trying to work with the built-in export function in torch but keep running into issues: the export call works, but I am having trouble initializing an inference session in onnxruntime.
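
Roughly, what I am attempting looks like this (a minimal sketch; the dummy shapes, the input/output names, and the forward signature of AddressParser.model are my assumptions, not deepparse's documented API):

```python
import numpy as np
import onnxruntime as ort
import torch
from deepparse.parser import AddressParser

# The underlying PyTorch module, as discussed above.
network = AddressParser(model_type="fasttext", device="cpu").model

# Fake a single pre-embedded address of six words (300-d vectors),
# with the batch size fixed to 1 for starters.
dummy_embeddings = torch.randn(1, 6, 300)
dummy_lengths = torch.tensor([6], dtype=torch.int64)

torch.onnx.export(
    network,
    (dummy_embeddings, dummy_lengths),
    "address_parser.onnx",
    opset_version=14,
    input_names=["embeddings", "lengths"],
    output_names=["tag_logits"],
)

# The export call above goes through, but this is where it breaks for me:
session = ort.InferenceSession("address_parser.onnx")
outputs = session.run(
    None,
    {
        "embeddings": np.random.randn(1, 6, 300).astype(np.float32),
        "lengths": np.array([6], dtype=np.int64),
    },
)
```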

Fixing the batch_size to be 1 should be good as well, for starters!

Do share any insights/suggestions. Thanks!

FYI - my error:

[Screenshot: onnxruntime InferenceSession error traceback]

ml5ah avatar Aug 04 '22 18:08 ml5ah

@davebulaval saw your updated reply - got it, that makes sense. Sure, I'll keep you posted if I have any success.

ml5ah avatar Aug 04 '22 18:08 ml5ah

It seems like a float typing error (it converts some tensors into float and others into long). The LSTM parameters are LongTensor, and the problem may lie there.

davebulaval avatar Aug 04 '22 18:08 davebulaval

@ml5ah this post might be useful: https://stackoverflow.com/questions/57299674/trouble-converting-lstm-pytorch-model-to-onnx.

davebulaval avatar Aug 04 '22 18:08 davebulaval

@ml5ah I've just added the save_model_weights method to the AddressParser class on the dev branch. It saves the PyTorch state dictionary in pickle format.

If you need to use the model in another ML framework or code base (e.g. Java), you can 'simply' load the weight matrices. Usually this is convenient, but you might need some naming/format conversion; a sketch follows.
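
For example, a minimal sketch of that conversion step (file names hypothetical; unpickling the state dict still requires torch installed, but the resulting .npz file does not):

```python
import pickle

import numpy as np

# Load the state dict saved by save_model_weights: an OrderedDict of
# parameter-name -> torch.Tensor.
with open("address_parser_weights.p", "rb") as f:
    state_dict = pickle.load(f)

# Dump every weight matrix as a plain NumPy array; the .npz archive can
# then be read from any runtime with an npz reader, no PyTorch required.
arrays = {name: tensor.detach().cpu().numpy() for name, tensor in state_dict.items()}
np.savez("address_parser_weights.npz", **arrays)
```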

davebulaval avatar Aug 04 '22 20:08 davebulaval

Thanks @davebulaval! That function helped and I was able to move forward, though I faced some more roadblocks along the way.

I faced 2 problems:

  1. The size of the input tensor varies with the number of "words" in the input address text, which impacts the decomposition-lengths input as well. I solved this temporarily to unblock myself, but I am not sure ONNX can handle it.

  2. While exporting, an operator called "resolve_conj" is used, which does not seem to be supported in any opset version yet. Documented here: https://github.com/pytorch/pytorch/issues/73619. We might have to wait a bit for it to be supported.

ml5ah avatar Aug 08 '22 17:08 ml5ah

@ml5ah I see. Yeah, it seems like it is not possible for now.

And what exactly is your objective? Into which language are you trying to import it?

davebulaval avatar Aug 08 '22 17:08 davebulaval

@davebulaval the objective is to deploy the pre-trained address parser model for inference using onnxruntime (in either Python or Java). To do this, I've been trying to convert the model to ONNX using Python.

ml5ah avatar Aug 08 '22 18:08 ml5ah

Ok, I got it. Do you want the address parser as an API-like service?

davebulaval avatar Aug 08 '22 18:08 davebulaval

Yep, exactly, with the constraint that inference uses onnxruntime with no dependency on PyTorch.

ml5ah avatar Aug 09 '22 04:08 ml5ah

Keep us updated on your progress. I would love to have 1) the script for the ONNX conversion and 2) the script to bundle it into an API. It would be a great documentation improvement to have that.
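
For 2), even something this small would already be useful as a doc example (a sketch using FastAPI, which is my choice here, not a project requirement; it serves deepparse directly rather than an ONNX artifact, and to_dict() assumes the FormattedParsedAddress API):

```python
from deepparse.parser import AddressParser
from fastapi import FastAPI

app = FastAPI()
address_parser = AddressParser(model_type="fasttext", device="cpu")

@app.get("/parse")
def parse(address: str) -> dict:
    # AddressParser is callable on a single address string and returns a
    # FormattedParsedAddress; to_dict() maps address components to tags.
    return address_parser(address).to_dict()
```

Run it with e.g. `uvicorn main:app` (if saved as main.py) and query `/parse?address=...`.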

davebulaval avatar Aug 09 '22 14:08 davebulaval

This issue is stale because it has been open 60 days with no activity.
Stale issues will automatically be closed 30 days after being marked Stale.

github-actions[bot] avatar Oct 09 '22 00:10 github-actions[bot]

@ml5ah hey, I'm curious whether you managed to create an ONNX export? If so, it would be great if you could share your insights.

kleineroscar avatar Apr 26 '23 22:04 kleineroscar

The last time I checked, my bottleneck with ONNX was that the batch size needed to be fixed beforehand.

davebulaval avatar Apr 27 '23 10:04 davebulaval

@kleineroscar @davebulaval apologies for the late reply. It's been a while since I actively looked at this issue. ONNX does provide support for dynamic batch sizes via the dynamic_axes feature, but as far as I remember, it did not work out of the box for deepparse the last time I tried.

I will give it another shot - hopefully things have changed.
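
For reference, the dynamic_axes export I have in mind looks roughly like this (a sketch; the dummy shapes, the input/output names, and the forward signature of AddressParser.model are assumptions, as in the earlier sketch):

```python
import torch
from deepparse.parser import AddressParser

network = AddressParser(model_type="fasttext", device="cpu").model

dummy_embeddings = torch.randn(1, 6, 300)  # one address, six 300-d words
dummy_lengths = torch.tensor([6], dtype=torch.int64)

torch.onnx.export(
    network,
    (dummy_embeddings, dummy_lengths),
    "address_parser_dynamic.onnx",
    opset_version=14,
    input_names=["embeddings", "lengths"],
    output_names=["tag_logits"],
    # Mark the batch and word dimensions as dynamic so the exported graph
    # accepts variable batch sizes and variable-length addresses.
    dynamic_axes={
        "embeddings": {0: "batch", 1: "words"},
        "lengths": {0: "batch"},
        "tag_logits": {0: "batch"},
    },
)
```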

ml5ah avatar May 23 '23 17:05 ml5ah

Took another look; this doesn't seem to be a trivial problem. https://github.com/pytorch/pytorch/issues/28423 is another issue that will need to be solved before the parser model can be exported to ONNX. I also tried dealing with the embedding, encoder, and decoder models separately; that is non-trivial as well.

Please share your thoughts.

cc: @kleineroscar @davebulaval

ml5ah avatar May 24 '23 23:05 ml5ah

The embedding conversion is done outside of the model; thus, the LSTM expects to receive 300-d vectors. For simplicity, I think we could fix a batch size, but that would require padding with dummy examples whenever the number of addresses to parse is smaller than the batch size (see the sketch below). I am not a fan of ONNX; I found it too rigid.
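
A sketch of that padding idea (the constants and names are illustrative, not part of deepparse):

```python
import numpy as np

FIXED_BATCH_SIZE = 32  # the batch size baked into the hypothetical export
EMBEDDING_DIM = 300    # deepparse embeds words as 300-d vectors

def pad_to_fixed_batch(embedded: np.ndarray) -> tuple[np.ndarray, int]:
    """Pad an (n, words, 300) batch with zero examples up to the fixed size.

    Returns the padded batch and the real example count n, so the caller
    can discard the predictions for the padding rows afterwards.
    """
    n, words, dim = embedded.shape
    padded = np.zeros((FIXED_BATCH_SIZE, words, dim), dtype=embedded.dtype)
    padded[:n] = embedded
    return padded, n
```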

I've added functionality to allow someone to put the weights in an S3 bucket and process data in a cloud service. It could be a workaround to create an API in Python.

I have a friend working on Burn, a Rust Torch-like framework, but LSTM is not yet implemented there. I would prioritize a Rust implementation of Deepparse rather than working on/with ONNX.

davebulaval avatar May 25 '23 14:05 davebulaval