How to decrease inference time of LiLT?
Hi,
I'm using Hugging Face libraries in order to run LiLT.
How can I decrease inference time? Which code to use?
I've already tried BetterTransformer (Optimum) and ONNX export, but neither of them supports the LiLT model.
- BetterTransformer: `NotImplementedError: The model type lilt is not yet supported to be used with BetterTransformer.`
- ONNX: `KeyError: "lilt is not supported yet"`
Thank you.
Note: I asked this question here, too: https://github.com/jpWang/LiLT/issues/42
Issue opened in the Optimum library: https://github.com/huggingface/optimum/issues/1024
Have you considered making a smaller model? What is your model size?
One thing you can try, especially if you're using a multilingual model like https://huggingface.co/nielsr/lilt-xlm-roberta-base, is removing the token embeddings of the languages you don't need.
See this blog post for more info: https://medium.com/@coding-otter/reduce-your-transformers-model-size-by-removing-unwanted-tokens-and-word-embeddings-eec08166d2f9
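The core idea from that blog post can be sketched with plain PyTorch: keep only the embedding rows for the token ids you actually use and remap the ids. This is a minimal, hypothetical sketch (the `trim_embeddings` helper and the toy sizes are mine, not from the blog post or the LiLT codebase); applying it to a real model also requires rebuilding the tokenizer vocabulary to match the new id mapping.

```python
import torch
import torch.nn as nn


def trim_embeddings(embedding: nn.Embedding, keep_ids):
    """Return a smaller embedding holding only the rows for keep_ids,
    plus a mapping from old token ids to new ones."""
    keep_ids = sorted(set(keep_ids))
    old_to_new = {old: new for new, old in enumerate(keep_ids)}
    new_emb = nn.Embedding(len(keep_ids), embedding.embedding_dim)
    with torch.no_grad():
        # Copy over only the rows we want to keep, in their new order.
        new_emb.weight.copy_(embedding.weight[keep_ids])
    return new_emb, old_to_new


# Toy example: a 30000-row "vocabulary" trimmed down to 5 kept tokens.
full = nn.Embedding(30000, 16)
kept, mapping = trim_embeddings(full, [0, 2, 5, 100, 29999])
print(kept.weight.shape)  # the trimmed table has 5 rows of dim 16
```

For an XLM-RoBERTa-based checkpoint, the embedding matrix dominates the parameter count, so trimming it mainly shrinks the model size and memory footprint; the per-token compute of the transformer layers is unchanged.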