fenglinbei
Thanks a lot!
Copy the original model files to a separate directory, then load the model directly from that path afterwards.
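A minimal sketch of that workflow, using stdlib `shutil` to make the one-time copy; the directory names here are placeholders, and the dummy `config.json` merely stands in for the real checkpoint files:

```python
import shutil
import tempfile
from pathlib import Path

# Stand-in for the original downloaded snapshot (adjust to the real location).
src = Path(tempfile.mkdtemp()) / "Baichuan-13B-Chat"
src.mkdir()
(src / "config.json").write_text("{}")  # dummy file in place of real weights

# One-time copy of the snapshot to a path you control.
dst = src.parent / "Baichuan-13B-Chat-local"
shutil.copytree(src, dst)

# Later loads can read straight from the copied path instead of the hub, e.g.:
#   model = AutoModelForCausalLM.from_pretrained(str(dst), trust_remote_code=True)
print((dst / "config.json").exists())
```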
The model seems to support only 4096 tokens; we'll have to wait for an official update.
https://huggingface.co/baichuan-inc/Baichuan-13B-Chat/discussions/8
```
from transformers.generation.utils import GenerationConfig

model.generation_config = GenerationConfig(**{
    "assistant_token_id": 196,
    "bos_token_id": 1,
    "do_sample": True,
    "eos_token_id": 2,
    "max_new_tokens": 4,
    "pad_token_id": 0,
    "repetition_penalty": 1.1,
    "temperature": 0.3,
    "top_k": 5,
    "top_p": 0.85,
    "transformers_version": "4.30.2",
    ...
```
> model has no chat attribute, so how does this line even run? model.chat(tokenizer, messages, stream=True):

model does have a chat method; your model-loading arguments were probably not set correctly.
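A likely cause, sketched below as an assumption rather than a confirmed diagnosis: Baichuan's `chat()` is defined in the custom modeling code shipped with the checkpoint, so loading without `trust_remote_code=True` yields a plain `transformers` model class with no `chat` attribute. The `messages` value here is an illustrative placeholder:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# trust_remote_code=True pulls in the repo's custom modeling code,
# which is where the chat() method is defined.
tokenizer = AutoTokenizer.from_pretrained(
    "baichuan-inc/Baichuan-13B-Chat", use_fast=False, trust_remote_code=True
)
model = AutoModelForCausalLM.from_pretrained(
    "baichuan-inc/Baichuan-13B-Chat", trust_remote_code=True
)

messages = [{"role": "user", "content": "Hello"}]  # placeholder prompt
for chunk in model.chat(tokenizer, messages, stream=True):
    print(chunk)
```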
Hoping Qwen-72B can be supported as well.