
How to decrease inference time of LiLT?

Open piegu opened this issue 2 years ago • 3 comments

Hi,

I'm using Hugging Face libraries in order to run LiLT. How can I decrease inference time? Which code to use?

I've already tried BetterTransformer (Optimum) and ONNX, but neither of them accepts the LiLT model.

  • BetterTransformer: NotImplementedError: The model type lilt is not yet supported to be used with BetterTransformer.
  • ONNX: KeyError: "lilt is not supported yet."

Thank you.

Note: I asked this question here, too: https://github.com/jpWang/LiLT/issues/42

piegu avatar Apr 29 '23 09:04 piegu

Issue opened in the Optimum library: https://github.com/huggingface/optimum/issues/1024

piegu avatar May 02 '23 09:05 piegu

Have you considered making a smaller model? What is your model size?

bkocis avatar Jun 27 '23 07:06 bkocis

One thing you can try, especially if you're using a multilingual model like https://huggingface.co/nielsr/lilt-xlm-roberta-base, is removing the token embeddings of languages that you don't need.

See this blog post for more info: https://medium.com/@coding-otter/reduce-your-transformers-model-size-by-removing-unwanted-tokens-and-word-embeddings-eec08166d2f9
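The core of that technique can be sketched in plain PyTorch: collect the token ids you actually need (e.g. the ids produced by tokenizing a corpus in your target languages), copy only those rows into a smaller embedding matrix, and keep an old-id-to-new-id mapping to apply to the tokenizer's vocabulary as well. This is a minimal sketch, not the blog post's exact code; `prune_embeddings` and the toy sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

def prune_embeddings(embedding: nn.Embedding, keep_ids: list[int]):
    """Build a smaller embedding layer containing only the rows in keep_ids.

    Returns the new layer and an old-id -> new-id mapping; the same mapping
    must be applied to the tokenizer's vocabulary so inputs use the new ids.
    (Hypothetical helper for illustration, not a Hugging Face API.)
    """
    keep_ids = sorted(set(keep_ids))
    old_to_new = {old: new for new, old in enumerate(keep_ids)}
    new_emb = nn.Embedding(len(keep_ids), embedding.embedding_dim)
    with torch.no_grad():
        # Copy only the kept rows, preserving their original vectors.
        new_emb.weight.copy_(embedding.weight[keep_ids])
    return new_emb, old_to_new

# Toy example: a 10-token vocabulary reduced to 4 kept tokens.
full = nn.Embedding(10, 8)
pruned, mapping = prune_embeddings(full, [0, 2, 5, 9])
assert pruned.weight.shape == (4, 8)
assert torch.equal(pruned.weight[mapping[5]], full.weight[5])
```

For a real LiLT checkpoint you would replace `model.lilt.embeddings.word_embeddings` with the pruned layer and shrink the tokenizer's vocabulary accordingly; this mainly reduces memory and load time, since the embedding lookup itself is not the inference bottleneck.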

NielsRogge avatar Jul 03 '23 08:07 NielsRogge