
[bug] group_size = 64 produces incorrect output

Status: Open · opened by lippman1125 · 5 comments

After changing only these two lines of code, the LLaMa-2-7B model no longer produces correct output: `Generate<64, Ops>(kernels_); group_sizes_.push_back(64);`

FP16 output: @Input: The first time I saw the movie, I was like, 'Oh my God, _Output: this is so cool.' I was like,

INT4 (group size 64) output: @Input: The first time I saw the movie, I was like, 'Oh my God, _Output: I was a little, I was a little

— lippman1125, Aug 28 '23

Did you use this bin model? https://huggingface.co/lmdeploy/llama2-chat-7b-w4/blob/main/pytorch_model.bin

What command did you use to convert the model?

— irexyc, Aug 29 '23

Calibration:

HF_MODEL=./llama-2-7b
WORK_DIR=../llama-2-7b-awq
python3 -m lmdeploy.lite.apis.calibrate \
  --model $HF_MODEL \
  --calib_dataset 'c4' \
  --calib_samples 128 \
  --calib_seqlen 2048 \
  --work_dir $WORK_DIR

Quantization:

HF_MODEL=./llama-2-7b
WORK_DIR=../llama-2-7b-awq-64
python3 -m lmdeploy.lite.apis.auto_awq \
  --model $HF_MODEL \
  --w_bits 4 \
  --w_group_size 64 \
  --work_dir $WORK_DIR

Deployment:

python3 -m lmdeploy.serve.turbomind.deploy \
  --model-name llama2 \
  --model-path ../llama-2-7b-awq-64 \
  --model-format awq \
  --group-size 64 \
  --dst-path ../llama-2-7b-awq-64-deploy

— lippman1125, Aug 29 '23

@irexyc No, I ran the conversion and deployment pipeline myself, starting from the official model.

— lippman1125, Aug 29 '23

@pppppM Is this related to the quantization?

— lvhan028, Aug 29 '23

@lvhan028 Currently testing g64 quantization.

— pppppM, Sep 05 '23