lightseq
Export the weights as a normal PyTorch state_dict
Can LightSeq export the weights as a normal PyTorch state_dict after training with LightSeq? That way, Hugging Face Transformers or other PyTorch libraries could load and use them later.
- If you train without LightSeq and fine-tune using LightSeq, use the pre-trained parameters to initialize the LightSeq layers.
- If you train with LightSeq and fine-tune using LightSeq, directly save the state_dict and load it for further fine-tuning or inference without any modification.
- If you train with LightSeq and want to export for inference using the LightSeq inference library, refer to examples/inference/python/export/ls_fs_transformer_export.py. We provide an example of training with Fairseq+LightSeq, exporting to protobuf format, and then using the LightSeq inference library for prediction.
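The first case (initializing LightSeq layers from pre-trained weights) boils down to renaming state_dict keys. The sketch below shows the idea with plain Python dicts standing in for tensors; the LightSeq-side key names (`self_attn.q_proj.weight`, etc.) are illustrative assumptions, not LightSeq's actual parameter layout.

```python
# Hypothetical key mapping from a Hugging Face BERT encoder layer to a
# flat per-layer layout. Adapt the right-hand names to the target layer.
HF_TO_LS = {
    "attention.self.query.weight": "self_attn.q_proj.weight",
    "attention.self.key.weight": "self_attn.k_proj.weight",
    "attention.self.value.weight": "self_attn.v_proj.weight",
    "attention.output.dense.weight": "self_attn.out_proj.weight",
}

def remap_hf_to_ls(hf_state, layer_idx):
    """Pick one encoder layer's tensors out of a HF state_dict and rename them."""
    prefix = f"encoder.layer.{layer_idx}."
    return {ls_key: hf_state[prefix + hf_key] for hf_key, ls_key in HF_TO_LS.items()}

# Toy state_dict with list placeholders standing in for torch tensors.
hf_state = {f"encoder.layer.0.{k}": [0.0] for k in HF_TO_LS}
ls_state = remap_hf_to_ls(hf_state, layer_idx=0)
```

In a real script the values would be `torch.Tensor`s copied into the LightSeq layer's parameters after remapping.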
So LightSeq doesn't support "training with LightSeq and fine-tuning or inference with PyTorch or Hugging Face"? Is there any plan to support it?
It can be done by converting the LightSeq layers back to PyTorch or Hugging Face layers, just like the way you convert PyTorch layers to LightSeq layers.
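One wrinkle in converting back: fast attention kernels often store Q/K/V as one fused matrix, while PyTorch and Hugging Face keep three separate ones. This toy sketch splits a fused matrix and rebuilds a torch-style state_dict; plain lists stand in for tensors, and all key names are illustrative assumptions.

```python
def split_qkv(fused_rows, hidden_size):
    """Split a fused [3*hidden, ...] QKV matrix back into separate Q, K, V."""
    q = fused_rows[:hidden_size]
    k = fused_rows[hidden_size:2 * hidden_size]
    v = fused_rows[2 * hidden_size:]
    return q, k, v

def ls_to_torch_state(ls_state, hidden_size):
    """Build a torch-style state_dict from a hypothetical fused layout."""
    q, k, v = split_qkv(ls_state["self_attn.qkv_proj.weight"], hidden_size)
    return {
        "self_attn.q_proj.weight": q,
        "self_attn.k_proj.weight": k,
        "self_attn.v_proj.weight": v,
        "self_attn.out_proj.weight": ls_state["self_attn.out_proj.weight"],
    }

# Toy fused matrix: 3*hidden rows, hidden columns.
hidden = 2
fused = [[float(i)] * hidden for i in range(3 * hidden)]
ls_state = {
    "self_attn.qkv_proj.weight": fused,
    "self_attn.out_proj.weight": [[0.0] * hidden for _ in range(hidden)],
}
torch_state = ls_to_torch_state(ls_state, hidden)
```

The reverse direction (building LightSeq weights from separate Q/K/V) is just concatenation in the same order.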
For reply 1, how do I load pre-trained parameters to initialize LightSeq? Are there any samples? Thanks.
@daeing You can check the Hugging Face BERT example.
Sorry, I have a pretrained PyTorch transformer decoder layer. I want to use LightSeq to accelerate inference. But I don't know how to load the pretrained parameters into the LightSeq decoder layer. Do you mean I should use Hugging Face to create a transformer decoder layer, load the pretrained parameters into it, and then convert the Hugging Face layer to LightSeq?
You can check the inference example and modify the export script for your pretrained model.
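Conceptually, an export script like `ls_fs_transformer_export.py` walks the checkpoint's state_dict, gathers each layer's tensors in a fixed order, and hands them to a serializer (protobuf in the real script). The sketch below shows only that gathering step; the key names and ordering are illustrative assumptions you would replace with your own model's names.

```python
# Hypothetical fixed ordering the exporter expects for one decoder layer.
EXPORT_ORDER = [
    "self_attn.q_proj.weight",
    "self_attn.k_proj.weight",
    "self_attn.v_proj.weight",
    "self_attn.out_proj.weight",
]

def collect_layer(state_dict, layer_prefix):
    """Return one layer's tensors in export order, failing loudly on gaps."""
    missing = [n for n in EXPORT_ORDER if layer_prefix + n not in state_dict]
    if missing:
        raise KeyError(f"checkpoint is missing: {missing}")
    return [state_dict[layer_prefix + n] for n in EXPORT_ORDER]

# Toy checkpoint: each "tensor" is just a list naming itself.
state = {f"decoder.layers.0.{n}": [n] for n in EXPORT_ORDER}
params = collect_layer(state, "decoder.layers.0.")
```

When adapting the real export script, the main work is swapping `EXPORT_ORDER` and the prefix for the names your pretrained checkpoint actually uses.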
OK, many thanks.