Yang Wei
You can modify the following code: https://github.com/bytedance/lightseq/blob/master/examples/inference/python/export/huggingface/hf_gpt2_export.py#L165
> Fairseq works fine, but LightSeq (with the same params) throws this error. > [Here I found the relevant code](https://github.com/bytedance/lightseq/blob/master/examples/training/fairseq/fs_modules/ls_transformer.py#L23) > > Why is MAX_SEQ_LENGTH = 300 needed? > > Can someone help me...
> Hi, I would like to know whether there is any module in LightSeq that can directly convert a Transformer model trained in Fairseq into the LightSeq Transformer model (transformer.pb) and...
This script requires the Hugging Face Transformers library to tokenize the sentence, so you should run `pip install transformers` first.
Thanks for your modification! However, we already fixed these issues in another branch and will merge it into master later.
The results of the LightSeq Transformer model include all of the beam results. In the following line, you can print `ls_res_ids` directly to see the complete results: https://github.com/bytedance/lightseq/blob/master/examples/inference/python/test/ls_bart.py#L31
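To illustrate, here is a minimal stdlib-only sketch of iterating over all beam hypotheses. The shapes and values are made up for demonstration; `ls_res_ids` below is a stand-in for the `[batch, beam, seq_len]` array the real model returns:

```python
# Illustrative stand-in for the LightSeq output: token ids for
# every beam, assumed shape [batch, beam, seq_len].
batch, beam, seq_len = 2, 4, 5
ls_res_ids = [[[b * 100 + k * 10 + t for t in range(seq_len)]
               for k in range(beam)]
              for b in range(batch)]

# Print every beam hypothesis, not just the top-scoring one.
for b, beams in enumerate(ls_res_ids):
    for k, hyp in enumerate(beams):
        print(f"sample {b}, beam {k}: {hyp}")
```

With the real model output, the same loop lets you inspect whether a hypothesis you expected appears in a lower-ranked beam.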
1. If you train without LightSeq and fine-tune using LightSeq, use the pre-trained parameters to initialize the LightSeq layers.
2. If you train with LightSeq and fine-tune using LightSeq, directly save...
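For case 1, the core step is remapping the pre-trained parameter names onto the names the LightSeq layer expects. A minimal sketch, where both the state dict and the rename table are illustrative placeholders, not the real key layout of either library:

```python
# Hypothetical pre-trained state dict (keys/values are placeholders).
pretrained = {
    "encoder.layer.0.attention.self.query.weight": [[0.1]],
    "encoder.layer.0.attention.self.key.weight": [[0.2]],
}

# Hypothetical rename table; the real mapping depends on the
# specific LightSeq layer you instantiate.
rename = {
    "encoder.layer.0.attention.self.query.weight": "layers.0.self_attn.q_proj.weight",
    "encoder.layer.0.attention.self.key.weight": "layers.0.self_attn.k_proj.weight",
}

# Build a state dict keyed by the target layer's parameter names.
ls_state = {rename[k]: v for k, v in pretrained.items()}
print(sorted(ls_state))
```

The resulting `ls_state` would then be loaded into the LightSeq layer (e.g. via its `load_state_dict`-style API), so the fine-tuning run starts from the pre-trained weights.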
> I have recently been looking into how to speed up text generation, and I found that GPU utilization during inference with my trained model is only about 20%. What could cause this? I think text generation should be faster if GPU utilization were higher.

You can try converting the model to HDF5 and then running inference with LightSeq, which is roughly 5x faster. Here is an example: https://github.com/bytedance/lightseq/tree/master/examples/inference/python
I came across the same error before but forget how I solved it. It may be caused by an incorrect version or environment variables.
That would also work, but mainly because some plugins may have patched bugs in the source code, and because this way no installation is needed, I just uploaded it directly.