lightseq
How to use test/ls_quant_bert.py?
When I run the script, the first error I get is that model.bin does not exist.
Then I downloaded the bert-base-uncased model (pytorch_model.bin) from Hugging Face: git lfs install, then git clone https://huggingface.co/bert-base-uncased.
When I use pytorch_model.bin as the ckpt_path (model_name), the script outputs these errors:
bert.encoder.layer.10.self_attn.out_proj.output_quant: No amax in state_dict.
bert.encoder.layer.10.self_attn.out_proj.weight_quant: No amax in state_dict.
bert.encoder.layer.10.fc1.input_quant: No amax in state_dict.
bert.encoder.layer.10.fc1.output_quant: No amax in state_dict.
bert.encoder.layer.10.fc1.weight_quant: No amax in state_dict.
bert.encoder.layer.10.fc2.input_quant: No amax in state_dict.
bert.encoder.layer.10.fc2.output_quant: No amax in state_dict.
bert.encoder.layer.10.fc2.weight_quant: No amax in state_dict.
bert.encoder.layer.11.self_attn.qkv_proj.input_quant: No amax in state_dict.
bert.encoder.layer.11.self_attn.qkv_proj.output_quant: No amax in state_dict.
bert.encoder.layer.11.self_attn.qkv_proj.weight_quant: No amax in state_dict.
bert.encoder.layer.11.self_attn.out_proj.input_quant: No amax in state_dict.
bert.encoder.layer.11.self_attn.out_proj.output_quant: No amax in state_dict.
bert.encoder.layer.11.self_attn.out_proj.weight_quant: No amax in state_dict.
bert.encoder.layer.11.fc1.input_quant: No amax in state_dict.
bert.encoder.layer.11.fc1.output_quant: No amax in state_dict.
bert.encoder.layer.11.fc1.weight_quant: No amax in state_dict.
bert.encoder.layer.11.fc2.input_quant: No amax in state_dict.
bert.encoder.layer.11.fc2.output_quant: No amax in state_dict.
bert.encoder.layer.11.fc2.weight_quant: No amax in state_dict.
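A likely cause of these messages: a plain bert-base-uncased checkpoint contains no quantizer calibration ("amax") tensors, so every quantizer reports "No amax in state_dict." A minimal sketch (the key-matching rule here is an assumption, not lightseq's actual loader) to check whether a state_dict carries any amax entries:

```python
# Sketch: scan a checkpoint's state_dict for quantizer calibration entries.
# A vanilla Hugging Face BERT checkpoint has none; a quantization-aware
# checkpoint would contain keys like "...input_quant._amax".
def find_amax_keys(state_dict):
    # substring match is an assumption about how the keys are named
    return [k for k in state_dict if "amax" in k]

# Simulated plain Hugging Face checkpoint (no calibration tensors):
plain = {"bert.encoder.layer.0.attention.self.query.weight": [[0.0]]}
print(find_amax_keys(plain))  # → []

# Simulated quantization-aware checkpoint:
qat = dict(plain)
qat["bert.encoder.layer.11.self_attn.qkv_proj.input_quant._amax"] = 1.0
print(find_amax_keys(qat))
# → ['bert.encoder.layer.11.self_attn.qkv_proj.input_quant._amax']
```

If the plain checkpoint really has no such keys, that would suggest the script expects a quantization-aware (calibrated) checkpoint rather than the stock pytorch_model.bin.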
creating lightseq model...
Traceback (most recent call last):
File "test/ls_quant_bert.py", line 204, in
Do I need to convert the checkpoint data format to lightseq's format first?
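For what it's worth, the error log shows fused parameter names like self_attn.qkv_proj, while a Hugging Face BERT checkpoint stores separate query/key/value projections. Any conversion would presumably have to fuse them; a rough sketch (the fusion layout is assumed from the log, lightseq's actual export script may differ):

```python
import numpy as np

# Assumed conversion step: concatenate the separate Hugging Face
# query/key/value projection matrices into one fused qkv_proj matrix,
# as the lightseq-style names in the error log suggest.
def fuse_qkv(q, k, v):
    # stack along the output dimension: (3 * hidden, hidden)
    return np.concatenate([q, k, v], axis=0)

hidden = 4
q = np.full((hidden, hidden), 1.0)  # stand-in for ...attention.self.query.weight
k = np.full((hidden, hidden), 2.0)  # stand-in for ...attention.self.key.weight
v = np.full((hidden, hidden), 3.0)  # stand-in for ...attention.self.value.weight

qkv = fuse_qkv(q, k, v)
print(qkv.shape)  # → (12, 4)
```

Even with the names remapped, the missing amax calibration values would still have to come from somewhere, so a plain rename alone may not be enough.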