
[Bug] A custom model fine-tuned from internlm2-chat-7b is unusable after 4-bit quantization

Open zhanghui-china opened this issue 1 year ago • 11 comments

Checklist

  • [X] 1. I have searched related issues but cannot get the expected help.
  • [X] 2. The bug has not been fixed in the latest version.

Describe the bug

I built a recipe model based on internlm2-chat-7b: zhanghuiATchina/zhangxiaobai_shishen2_full

https://www.modelscope.cn/models/zhanghuiATchina/zhangxiaobai_shishen2_full/summary

It runs well.

Now I want to quantize it to 4 bits, but the quantized model is unusable.

Reproduction

The steps are as follows:

conda activate lmdeploynew
cd ~/shishen2-full
lmdeploy lite auto_awq ./merged --w-bits 4 --w-group-size 128 --work-dir ./merged-4bit --calib-dataset c4

It completed normally. The generated directory: (screenshot)

Now neither xtuner chat nor lmdeploy chat can carry on a normal conversation:

xtuner version: 0.1.13 (screenshot)

The xtuner chat symptom:

xtuner chat ./merged-4bit --prompt-template internlm2_chat --temperature 0.8 --top-p 0.8 --repetition-penalty 1.002

It hangs for a long time at startup, then: (screenshots)

After I enter 小笼包怎么做? it turns into this: (screenshot)

xtuner chat ./merged-4bit --bits 4 --temperature 0.8 --top-p 0.8 --repetition-penalty 1.002 --prompt-template internlm2_chat

It likewise hangs for a long time at startup, then: (screenshots)

After I enter 小笼包怎么做? it again turns into this: (screenshot)

For reference, a conversation with the original model: xtuner chat ./merged --bits 4 --temperature 0.8 --top-p 0.8 --repetition-penalty 1.002 --prompt-template internlm2_chat (screenshot)

Environment

(lmdeploynew) zhanghui@zhanghui:~/shishen2-full$ lmdeploy check_env
/home/zhanghui/anaconda3/envs/lmdeploynew/lib/python3.10/site-packages/fuzzywuzzy/fuzz.py:11: UserWarning: Using slow pure-python SequenceMatcher. Install python-Levenshtein to remove this warning
  warnings.warn('Using slow pure-python SequenceMatcher. Install python-Levenshtein to remove this warning')
sys.platform: linux
Python: 3.10.13 (main, Sep 11 2023, 13:44:35) [GCC 11.2.0]
CUDA available: True
MUSA available: False
numpy_random_seed: 2147483648
GPU 0: NVIDIA GeForce RTX 3080 Laptop GPU
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 11.6, V11.6.124
GCC: gcc (Ubuntu 9.5.0-1ubuntu1~22.04) 9.5.0
PyTorch: 2.1.2+cu121
PyTorch compiling details: PyTorch built with:
  - GCC 9.3
  - C++ Version: 201703
  - Intel(R) oneAPI Math Kernel Library Version 2022.2-Product Build 20220804 for Intel(R) 64 architecture applications
  - Intel(R) MKL-DNN v3.1.1 (Git Hash 64f6bcbcbab628e96f33a62c3e975f8535a7bde4)
  - OpenMP 201511 (a.k.a. OpenMP 4.5)
  - LAPACK is enabled (usually provided by MKL)
  - NNPACK is enabled
  - CPU capability usage: AVX512
  - CUDA Runtime 12.1
  - NVCC architecture flags: -gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_90,code=sm_90
  - CuDNN 8.5  (built against CUDA 11.7)
    - Built with CuDNN 8.9.2
  - Magma 2.6.1
  - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=12.1, CUDNN_VERSION=8.9.2, CXX_COMPILER=/opt/rh/devtoolset-9/root/usr/bin/c++, CXX_FLAGS= -D_GLIBCXX_USE_CXX11_ABI=0 -fabi-version=11 -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOROCTRACER -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=old-style-cast -Wno-invalid-partial-specialization -Wno-unused-private-field -Wno-aligned-allocation-unavailable -Wno-missing-braces -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Werror=cast-function-type -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_DISABLE_GPU_ASSERTS=ON, TORCH_VERSION=2.1.2, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=1, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF,

LMDeploy: 0.2.1+
transformers: 4.37.1
gradio: 3.50.2
fastapi: 0.109.0
pydantic: 2.5.3
(lmdeploynew) zhanghui@zhanghui:~/shishen2-full$

Error traceback

No response

zhanghui-china · Feb 03 '24

Could the 4-bit quantization step itself, lmdeploy lite auto_awq ./merged --w-bits 4 --w-group-size 128 --work-dir ./merged-4bit --calib-dataset c4, have gone wrong from the start?

zhanghui-china · Feb 04 '24

@zhanghui-china A model quantized by lmdeploy can only be used with lmdeploy chat; the model fed to xtuner chat must be the original fp16 model.
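A minimal sketch of that pairing via the Python API (assuming lmdeploy 0.2.x, where pipeline and TurbomindEngineConfig are available; ./merged-4bit is the work dir produced by the quantization step above):

```python
# Sketch: load the lmdeploy-quantized model with lmdeploy itself;
# xtuner chat should instead be pointed at the original fp16 weights.
from lmdeploy import pipeline, TurbomindEngineConfig

# model_format='awq' tells the TurboMind backend the weights are 4-bit AWQ.
pipe = pipeline('./merged-4bit',
                backend_config=TurbomindEngineConfig(model_format='awq'))

print(pipe('小笼包怎么做?'))
```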

pppppM · Feb 05 '24

Thank you for the pointer.

I ran:

lmdeploy lite auto_awq ./merged --w-bits 4 --w-group-size 128 --work-dir ./merged-4bit --calib-dataset c4

(screenshots) The resulting directory: (screenshot)

Then ran lmdeploy chat:

lmdeploy chat turbomind ./merged-4bit --model-format awq

(screenshots)

The original model behaves the same way: lmdeploy chat turbomind ./internlm2-chat-7b (screenshots)

Whereas when I run the model trained with xtuner train through xtuner chat, I get this result (this is the format I expect): xtuner chat ./merged --temperature 0.8 --top-p 0.8 --repetition-penalty 1.002 --prompt-template internlm2_chat (screenshot)

Which parameters does lmdeploy need to reproduce the xtuner chat behavior?
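For reference, the deprecation warnings in the logs above say sampling parameters should go through GenerationConfig in the Python API; a minimal sketch mirroring the xtuner chat flags (assuming lmdeploy 0.2.x; path and values taken from the commands above):

```python
# Sketch: pass the same sampling settings used with xtuner chat
# (--temperature / --top-p / --repetition-penalty) via GenerationConfig.
from lmdeploy import pipeline, GenerationConfig, TurbomindEngineConfig

pipe = pipeline('./merged-4bit',
                backend_config=TurbomindEngineConfig(model_format='awq'))

gen_config = GenerationConfig(temperature=0.8,
                              top_p=0.8,
                              repetition_penalty=1.002)

print(pipe('小笼包怎么做?', gen_config=gen_config))
```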

zhanghui-china · Feb 05 '24

Then let me try the recipe model I built on the first-generation internlm!

I ran:

cd ~/shishen-full
lmdeploy lite auto_awq ./merged --w-bits 4 --w-group-size 128 --work-dir ./merged-4bit --calib-dataset c4

(screenshots)

The generated result: (screenshot)

Then ran lmdeploy chat:

lmdeploy chat turbomind ./merged-4bit --model-format awq

(screenshots)

zhanghui-china · Feb 06 '24

(This is the format I expect:) xtuner chat ./merged --prompt-template internlm_chat (screenshots)

zhanghui-china · Feb 06 '24

(lmdeploy021) zhanghui@zhanghui:~/shishen-full$ lmdeploy chat turbomind ./merged-4bit --model-format awq
/home/zhanghui/anaconda3/envs/lmdeploy021/lib/python3.10/site-packages/fuzzywuzzy/fuzz.py:11: UserWarning: Using slow pure-python SequenceMatcher. Install python-Levenshtein to remove this warning
  warnings.warn('Using slow pure-python SequenceMatcher. Install python-Levenshtein to remove this warning')
model_source: hf_model
02/06 09:56:57 - turbomind - WARNING - kwargs model_format is deprecated to initialize model, use TurbomindEngineConfig instead.
02/06 09:56:57 - turbomind - WARNING - kwargs tp is deprecated to initialize model, use TurbomindEngineConfig instead.
02/06 09:56:57 - turbomind - WARNING - kwargs cache_max_entry_count is deprecated to initialize model, use TurbomindEngineConfig instead.
02/06 09:56:57 - lmdeploy - WARNING - Can not find tokenizer.json. It may take long time to initialize the tokenizer.
model_config:

[llama]
model_name = base
tensor_para_size = 1
head_num = 32
kv_head_num = 32
vocab_size = 103168
num_layer = 32
inter_size = 11008
norm_eps = 1e-06
attn_bias = 1
start_id = 1
end_id = 2
session_len = 2056
weight_type = int4
rotary_embedding = 128
rope_theta = 10000.0
size_per_head = 128
group_size = 128
max_batch_size = 128
max_context_token_num = 1
step_length = 1
cache_max_entry_count = 0.8
cache_block_seq_len = 128
cache_chunk_size = -1
num_tokens_per_iter = 0
max_prefill_iters = 1
extra_tokens_per_iter = 0
use_context_fmha = 1
quant_policy = 0
max_position_embeddings = 2048
rope_scaling_factor = 0.0
use_logn_attn = 0


get 419 model params
Input chat template with model_name is None. Forcing to use base
[WARNING] gemm_config.in is not found; using default GEMM algo
session 1

double enter to end input >>> 酸菜鱼怎么做?

酸菜鱼怎么做? 02/06 09:57:26 - turbomind - WARNING - kwargs top_p is deprecated for inference, use GenerationConfig instead.
02/06 09:57:26 - turbomind - WARNING - kwargs top_k is deprecated for inference, use GenerationConfig instead.
02/06 09:57:26 - turbomind - WARNING - kwargs temperature is deprecated for inference, use GenerationConfig instead.
02/06 09:57:26 - turbomind - WARNING - kwargs repetition_penalty is deprecated for inference, use GenerationConfig instead.
02/06 09:57:26 - turbomind - WARNING - kwargs ignore_eos is deprecated for inference, use GenerationConfig instead.
02/06 09:57:26 - turbomind - WARNING - kwargs random_seed is deprecated for inference, use GenerationConfig instead.
02/06 09:57:26 - turbomind - WARNING - kwargs request_output_len is deprecated for inference, use GenerationConfig instead.
首先,买一条活鱼,让鱼老板帮忙杀好,回家洗净,沥干,把鱼骨和鱼肉分开,分别剁成小块,用盐,料酒,胡椒粉,腌制20分钟,酸菜洗净,沥干,切成小段,准 备一些干辣椒,花椒,备用。', '锅中倒油,把鱼骨放入锅中煎制,煎制两面金黄,加入水,烧开,熬出奶白色,把鱼骨捞出,放入盘中备用。', '锅中留鱼汤,加 入鱼片,鱼片下锅后,不要搅拌,等鱼片定型,再轻轻搅拌,煮两分钟,捞出,放入盘中。', '锅中留鱼汤,加入酸菜,煮两分钟,加入鱼骨,煮两分钟,加入鱼片 ,煮一分钟,关火,盛盘。', '锅中倒油,加入干辣椒,花椒,炒香,把油淋在鱼片上,放上葱花,就可以上桌了。']

double enter to end input >>> 水煮鱼怎么做?

水煮鱼怎么做? 首先,买一条活鱼,让鱼老板帮忙杀好,回家洗净,沥干,把鱼骨和鱼肉分开,分别剁成小块,用盐,料酒,胡椒粉,腌制20分钟,酸菜洗净,沥干,切成小段,准备一些干辣椒,花椒,备用。', '锅中倒油,把鱼骨放入锅中煎制,煎制两面金黄,加入水,烧开,熬出奶白色,把鱼骨捞出,放入盘中备用。', ' 锅中留鱼汤,加入鱼片,鱼片下锅后,不要搅拌,等鱼片定型,再轻轻搅拌,煮两分钟,捞出,放入盘中。', '锅中留鱼汤,加入酸菜,煮两分钟,加入鱼骨,煮两 分钟,加入鱼片,煮一分钟,关火,盛盘。', '锅中倒油,加入干辣椒,花椒,炒香,把油淋在鱼片上,放上葱花,就可以上桌了。']

double enter to end input >>> EXIT

EXIT

double enter to end input >>> exit

(lmdeploy021) zhanghui@zhanghui:~/shishen-full$
(lmdeploy021) zhanghui@zhanghui:~/shishen-full$

zhanghui-china · Feb 06 '24

(xtunernew) zhanghui@zhanghui:~/shishen2-full$ conda deactivate
(base) zhanghui@zhanghui:~/shishen2-full$ conda activate lmdeploy021
(lmdeploy021) zhanghui@zhanghui:~/shishen2-full$ lmdeploy chat turbomind ./merged-4bit --model-format awq
/home/zhanghui/anaconda3/envs/lmdeploy021/lib/python3.10/site-packages/fuzzywuzzy/fuzz.py:11: UserWarning: Using slow pure-python SequenceMatcher. Install python-Levenshtein to remove this warning
  warnings.warn('Using slow pure-python SequenceMatcher. Install python-Levenshtein to remove this warning')
model_source: hf_model
02/06 10:39:05 - turbomind - WARNING - kwargs model_format is deprecated to initialize model, use TurbomindEngineConfig instead.
02/06 10:39:05 - turbomind - WARNING - kwargs tp is deprecated to initialize model, use TurbomindEngineConfig instead.
02/06 10:39:05 - turbomind - WARNING - kwargs cache_max_entry_count is deprecated to initialize model, use TurbomindEngineConfig instead.
02/06 10:39:05 - lmdeploy - WARNING - Can not find tokenizer.json. It may take long time to initialize the tokenizer.
model_config:

[llama]
model_name = base
tensor_para_size = 1
head_num = 32
kv_head_num = 8
vocab_size = 92544
num_layer = 32
inter_size = 14336
norm_eps = 1e-05
attn_bias = 0
start_id = 1
end_id = 2
session_len = 2056
weight_type = int4
rotary_embedding = 128
rope_theta = 1000000.0
size_per_head = 128
group_size = 128
max_batch_size = 128
max_context_token_num = 1
step_length = 1
cache_max_entry_count = 0.8
cache_block_seq_len = 128
cache_chunk_size = -1
num_tokens_per_iter = 0
max_prefill_iters = 1
extra_tokens_per_iter = 0
use_context_fmha = 1
quant_policy = 0
max_position_embeddings = 32768
rope_scaling_factor = 0.0
use_logn_attn = 0


get 355 model params
Input chat template with model_name is None. Forcing to use base
[WARNING] gemm_config.in is not found; using default GEMM algo
session 1

double enter to end input >>> 小笼包怎么做?

小笼包怎么做? 02/06 10:39:33 - turbomind - WARNING - kwargs top_p is deprecated for inference, use GenerationConfig instead.
02/06 10:39:33 - turbomind - WARNING - kwargs top_k is deprecated for inference, use GenerationConfig instead.
02/06 10:39:33 - turbomind - WARNING - kwargs temperature is deprecated for inference, use GenerationConfig instead.
02/06 10:39:33 - turbomind - WARNING - kwargs repetition_penalty is deprecated for inference, use GenerationConfig instead.
02/06 10:39:33 - turbomind - WARNING - kwargs ignore_eos is deprecated for inference, use GenerationConfig instead.
02/06 10:39:33 - turbomind - WARNING - kwargs random_seed is deprecated for inference, use GenerationConfig instead.
02/06 10:39:33 - turbomind - WARNING - kwargs request_output_len is deprecated for inference, use GenerationConfig instead.
', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', ' 小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', ' 小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', ' 小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', ' 小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', ' 小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', ' 小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', ' 小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', ' 小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', ' 小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', ' 小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', ' 小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', ' 小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', ' 小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', '小笼包怎么做?', ' 小笼包怎么做?', '小笼包怎么做?', '小

double enter to end input >>>

zhanghui-china · Feb 06 '24

Judging by the symptoms: after quantization, the first-generation custom model's answers are basically close to the expected ones, but the format falls short of what xtuner chat produces after training, while the second-generation custom model, after quantization, gets stuck in an endless repetition loop.

Could you please help check whether something is wrong with how I am invoking it?

zhanghui-china · Feb 06 '24

First, please confirm how the un-quantized model behaves under lmdeploy chat. If it is abnormal, the correct chat template is probably not being matched, and you need to specify --model-name manually; the available model names can be listed with lmdeploy list.
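A minimal sketch of pinning the chat template by name (assuming a recent lmdeploy 0.2.x where ChatTemplateConfig is exported; internlm2-chat-7b is assumed to be one of the names lmdeploy list reports). This is meant to avoid the "Forcing to use base" fallback seen in the logs above:

```python
# Sketch: explicitly select the chat template so the model is not
# treated as a raw base model ("Forcing to use base" in the log).
from lmdeploy import pipeline, ChatTemplateConfig, TurbomindEngineConfig

pipe = pipeline('./merged-4bit',
                backend_config=TurbomindEngineConfig(model_format='awq'),
                chat_template_config=ChatTemplateConfig(model_name='internlm2-chat-7b'))

print(pipe('酸菜鱼怎么做?'))
```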

pppppM · Feb 06 '24

The un-quantized model should be fine, since it is already in active use.

Let me demonstrate: (1) First generation: lmdeploy chat turbomind ./merged --model-name internlm-chat-7b

(lmdeploy021) zhanghui@zhanghui:~/shishen-full$ lmdeploy chat turbomind ./merged --model-name internlm-chat-7b
/home/zhanghui/anaconda3/envs/lmdeploy021/lib/python3.10/site-packages/fuzzywuzzy/fuzz.py:11: UserWarning: Using slow pure-python SequenceMatcher. Install python-Levenshtein to remove this warning
  warnings.warn('Using slow pure-python SequenceMatcher. Install python-Levenshtein to remove this warning')
model_source: hf_model
02/06 15:08:32 - turbomind - WARNING - kwargs tp is deprecated to initialize model, use TurbomindEngineConfig instead.
02/06 15:08:32 - turbomind - WARNING - kwargs cache_max_entry_count is deprecated to initialize model, use TurbomindEngineConfig instead.
02/06 15:08:32 - lmdeploy - WARNING - Can not find tokenizer.json. It may take long time to initialize the tokenizer.
model_config:

[llama]
model_name = base
tensor_para_size = 1
head_num = 32
kv_head_num = 32
vocab_size = 103168
num_layer = 32
inter_size = 11008
norm_eps = 1e-06
attn_bias = 1
start_id = 1
end_id = 2
session_len = 2056
weight_type = fp16
rotary_embedding = 128
rope_theta = 10000.0
size_per_head = 128
group_size = 0
max_batch_size = 128
max_context_token_num = 1
step_length = 1
cache_max_entry_count = 0.8
cache_block_seq_len = 128
cache_chunk_size = -1
num_tokens_per_iter = 0
max_prefill_iters = 1
extra_tokens_per_iter = 0
use_context_fmha = 1
quant_policy = 0
max_position_embeddings = 2048
rope_scaling_factor = 0.0
use_logn_attn = 0


get 323 model params
[WARNING] gemm_config.in is not found; using default GEMM algo
session 1

double enter to end input >>> 酸菜鱼怎么做

<|System|>:You are an AI assistant whose name is InternLM (书生·浦语).

  • InternLM (书生·浦语) is a conversational language model that is developed by Shanghai AI Laboratory (上海人工智能实验室). It is designed to be helpful, honest, and harmless.
  • InternLM (书生·浦语) can understand and communicate fluently in the language chosen by the user such as English and 中文.

<|User|>:酸菜鱼怎么做
<|Bot|>:
02/06 15:08:42 - turbomind - WARNING - kwargs top_p is deprecated for inference, use GenerationConfig instead.
02/06 15:08:42 - turbomind - WARNING - kwargs top_k is deprecated for inference, use GenerationConfig instead.
02/06 15:08:42 - turbomind - WARNING - kwargs temperature is deprecated for inference, use GenerationConfig instead.
02/06 15:08:42 - turbomind - WARNING - kwargs repetition_penalty is deprecated for inference, use GenerationConfig instead.
02/06 15:08:42 - turbomind - WARNING - kwargs ignore_eos is deprecated for inference, use GenerationConfig instead.
02/06 15:08:42 - turbomind - WARNING - kwargs random_seed is deprecated for inference, use GenerationConfig instead.
02/06 15:08:42 - turbomind - WARNING - kwargs request_output_len is deprecated for inference, use GenerationConfig instead.
您需要准备以下食材: ['一条鱼', '一块酸菜', '一个蛋清', '适量盐', '适量淀粉', '适量料酒', '适量花椒', '适量干辣椒', '适量姜', '适量蒜', '适量小葱', '适量白胡椒粉', '适量火锅底料'] 按以下方法制作: ['鱼清洗干净,斜刀切,加入料酒,盐,白胡椒粉,蛋清,淀粉腌制半小时', '锅中倒油,加入火锅底料,花椒,干辣椒,姜蒜炒香,加入酸菜翻炒,加入开水,煮十分钟,加入鱼骨,煮十分钟,加入鱼片,煮三分钟,加入葱花,完成']

double enter to end input >>>

zhanghui-china avatar Feb 06 '24 07:02 zhanghui-china

(2) Second generation:

cd ~/shishen2-full
lmdeploy chat turbomind ./merged --model-name internlm2-chat-7b

It sometimes gets into an endless loop (because no repetition penalty was passed):

(lmdeploy021) zhanghui@zhanghui:~/shishen-full$ cd ~/shishen2-full
(lmdeploy021) zhanghui@zhanghui:~/shishen2-full$ lmdeploy chat turbomind ./merged --model-name internlm2-chat-7b
/home/zhanghui/anaconda3/envs/lmdeploy021/lib/python3.10/site-packages/fuzzywuzzy/fuzz.py:11: UserWarning: Using slow pure-python SequenceMatcher. Install python-Levenshtein to remove this warning
  warnings.warn('Using slow pure-python SequenceMatcher. Install python-Levenshtein to remove this warning')
model_source: hf_model
02/06 15:09:43 - turbomind - WARNING - kwargs tp is deprecated to initialize model, use TurbomindEngineConfig instead.
02/06 15:09:43 - turbomind - WARNING - kwargs cache_max_entry_count is deprecated to initialize model, use TurbomindEngineConfig instead.
model_config:

[llama]
model_name = base
tensor_para_size = 1
head_num = 32
kv_head_num = 8
vocab_size = 92544
num_layer = 32
inter_size = 14336
norm_eps = 1e-05
attn_bias = 0
start_id = 1
end_id = 2
session_len = 2056
weight_type = fp16
rotary_embedding = 128
rope_theta = 1000000.0
size_per_head = 128
group_size = 0
max_batch_size = 128
max_context_token_num = 1
step_length = 1
cache_max_entry_count = 0.8
cache_block_seq_len = 128
cache_chunk_size = -1
num_tokens_per_iter = 0
max_prefill_iters = 1
extra_tokens_per_iter = 0
use_context_fmha = 1
quant_policy = 0
max_position_embeddings = 32768
rope_scaling_factor = 0.0
use_logn_attn = 0


get 259 model params
[WARNING] gemm_config.in is not found; using default GEMM algo
session 1

double enter to end input >>> 酸菜鱼怎么做

<|im_start|>system
You are an AI assistant whose name is InternLM (书生·浦语).

  • InternLM (书生·浦语) is a conversational language model that is developed by Shanghai AI Laboratory (上海人工智能实验室). It is designed to be helpful, honest, and harmless.
  • InternLM (书生·浦语) can understand and communicate fluently in the language chosen by the user such as English and 中文.
<|im_end|>
<|im_start|>user
酸菜鱼怎么做<|im_end|>
<|im_start|>assistant
02/06 15:10:22 - turbomind - WARNING - kwargs top_p is deprecated for inference, use GenerationConfig instead.
02/06 15:10:22 - turbomind - WARNING - kwargs top_k is deprecated for inference, use GenerationConfig instead.
02/06 15:10:22 - turbomind - WARNING - kwargs temperature is deprecated for inference, use GenerationConfig instead.
02/06 15:10:22 - turbomind - WARNING - kwargs repetition_penalty is deprecated for inference, use GenerationConfig instead.
02/06 15:10:22 - turbomind - WARNING - kwargs ignore_eos is deprecated for inference, use GenerationConfig instead.
02/06 15:10:22 - turbomind - WARNING - kwargs random_seed is deprecated for inference, use GenerationConfig instead.
02/06 15:10:22 - turbomind - WARNING - kwargs request_output_len is deprecated for inference, use GenerationConfig instead.
您需要准备以下食材: ['1条草鱼', '1把酸菜', '1勺淀粉', '5片姜', '5克盐', '5克胡椒粉', '5克味精', '5克白糖', '1勺料酒', '1勺老抽', '5克白醋', '1勺生抽', '1勺豆瓣酱', '1个蛋清', '1勺猪油', '5克小香葱'] 按以下方法制作: ['草鱼一条,洗净', '片鱼', '酸菜,洗净', '酸菜切小段', '姜切丝', '鱼骨洗净', '鱼骨用料酒,生抽,老抽,盐,胡椒粉,淀粉,味精,腌半小时', '鱼片用蛋清,淀粉,胡椒粉,盐,味精腌半小时', '热锅,放入猪油', '放入豆瓣酱', '放入姜', '放入酸菜', '放入鱼骨', '放入水', '大火煮10分钟', '放入鱼片', '放入味精', '放入胡椒粉', '放入白醋', '放入香葱', '出锅']

double enter to end input >>> 水煮鱼怎么做?

<|im_start|>user 水煮鱼怎么做?<|im_end|> <|im_start|>assistant 您需要准备以下食材: ['1条草鱼', '适量花椒', '1勺豆瓣酱', '1勺生抽', '1勺老抽', '1勺淀粉', '1勺料酒', '适量盐', '1勺鸡精', '1勺白糖', '1勺辣椒面', '1勺白芝麻', '1勺熟油', '1勺麻椒油', '1勺藤椒油', '1 个蛋清', '1勺胡椒粉', '1勺耗油', '1勺姜', '1勺葱', '1勺大蒜', '1勺黄豆酱', '1勺豆豉', '1勺白糖', '1勺鸡精', '1勺淀粉', '1勺辣椒面', '1勺白芝麻', '1勺熟油', '1勺麻椒油', '1勺藤椒油', '1勺藤椒油', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄 豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1 勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆 酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺 黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺 黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '1勺黄豆酱', '

double enter to end input >>>

zhanghui-china · Feb 06 '24