
QUESTION: AttributeError: [address=0.0.0.0:33767, pid=5848] Model model_format='pytorch' model_size_in_billions=13 quantizations=['4-bit', '8-bit', 'none'] model_id='FlagAlpha/Llama2-Chinese-13b-Chat' model_hub='huggingface' model_uri='file:///root/chinese-llama2' model_revision=None is not for chat.

Open ghkl98 opened this issue 1 year ago • 16 comments

I want to deploy my own local model. The model.json I wrote is as follows:

{
  "version": 1,
  "context_length": 2048,
  "model_name": "customer-llama-2",
  "model_lang": ["en", "zh"],
  "model_ability": ["chat"],
  "model_specs": [
    {
      "model_format": "pytorch",
      "model_size_in_billions": 13,
      "quantizations": ["4-bit", "8-bit", "none"],
      "model_id": "FlagAlpha/Llama2-Chinese-13b-Chat",
      "model_uri": "file:///root/chinese-llama2"
    }
  ]
}

It runs fine in xinference itself.

But when used from dify it raises: AttributeError: [address=0.0.0.0:33767, pid=5848] Model model_format='pytorch' model_size_in_billions=13 quantizations=['4-bit', '8-bit', 'none'] model_id='FlagAlpha/Llama2-Chinese-13b-Chat' model_hub='huggingface' model_uri='file:///root/chinese-llama2' model_revision=None is not for chat.

ghkl98 avatar Nov 26 '23 04:11 ghkl98

Traceback (most recent call last):
  File "/root/miniconda3/envs/py39_mhy/lib/python3.9/site-packages/xinference/api/restful_api.py", line 353, in describe_model
    data = await (await self._get_supervisor_ref()).describe_model(model_uid)
  File "/root/miniconda3/envs/py39_mhy/lib/python3.9/site-packages/xoscar/backends/context.py", line 227, in send
    return self._process_result_message(result)
  File "/root/miniconda3/envs/py39_mhy/lib/python3.9/site-packages/xoscar/backends/context.py", line 102, in _process_result_message
    raise message.as_instanceof_cause()
  File "/root/miniconda3/envs/py39_mhy/lib/python3.9/site-packages/xoscar/backends/pool.py", line 657, in send
    result = await self._run_coro(message.message_id, coro)
  File "/root/miniconda3/envs/py39_mhy/lib/python3.9/site-packages/xoscar/backends/pool.py", line 368, in _run_coro
    return await coro
  File "/root/miniconda3/envs/py39_mhy/lib/python3.9/site-packages/xoscar/api.py", line 306, in on_receive
    return await super().on_receive(message)  # type: ignore
  File "xoscar/core.pyx", line 558, in on_receive
    raise ex
  File "xoscar/core.pyx", line 520, in xoscar.core._BaseActor.on_receive
    async with self._lock:
  File "xoscar/core.pyx", line 521, in xoscar.core._BaseActor.on_receive
    with debug_async_timeout('actor_lock_timeout',
  File "xoscar/core.pyx", line 526, in xoscar.core._BaseActor.on_receive
    result = await result
  File "/root/miniconda3/envs/py39_mhy/lib/python3.9/site-packages/xinference/core/utils.py", line 30, in wrapped
    ret = await func(*args, **kwargs)
  File "/root/miniconda3/envs/py39_mhy/lib/python3.9/site-packages/xinference/core/supervisor.py", line 474, in describe_model
    raise ValueError(f"Model not found in the model list, uid: {model_uid}")
ValueError: [address=0.0.0.0:48099, pid=44353] Model not found in the model list, uid: 1e3681f8-8c14-11ee-a696-b36a74d71b49

2023-11-26 04:56:03,922 xinference.api.restful_api 44285 ERROR [address=0.0.0.0:35837, pid=44729] Model model_format='pytorch' model_size_in_billions=13 quantizations=['none'] model_id='FlagAlpha/Llama2-Chinese-13b-Chat' model_hub='huggingface' model_uri='file:///root/chinese-llama2' model_revision=None is not for chat.

Traceback (most recent call last):
  File "/root/miniconda3/envs/py39_mhy/lib/python3.9/site-packages/xinference/api/restful_api.py", line 791, in create_chat_completion
    data = await model.chat(prompt, system_prompt, chat_history, kwargs)
  File "/root/miniconda3/envs/py39_mhy/lib/python3.9/site-packages/xoscar/backends/context.py", line 227, in send
    return self._process_result_message(result)
  File "/root/miniconda3/envs/py39_mhy/lib/python3.9/site-packages/xoscar/backends/context.py", line 102, in _process_result_message
    raise message.as_instanceof_cause()
  File "/root/miniconda3/envs/py39_mhy/lib/python3.9/site-packages/xoscar/backends/pool.py", line 657, in send
    result = await self._run_coro(message.message_id, coro)
  File "/root/miniconda3/envs/py39_mhy/lib/python3.9/site-packages/xoscar/backends/pool.py", line 368, in _run_coro
    return await coro
  File "/root/miniconda3/envs/py39_mhy/lib/python3.9/site-packages/xoscar/api.py", line 306, in on_receive
    return await super().on_receive(message)  # type: ignore
  File "xoscar/core.pyx", line 558, in on_receive
    raise ex
  File "xoscar/core.pyx", line 520, in xoscar.core._BaseActor.on_receive
    async with self._lock:
  File "xoscar/core.pyx", line 521, in xoscar.core._BaseActor.on_receive
    with debug_async_timeout('actor_lock_timeout',
  File "xoscar/core.pyx", line 526, in xoscar.core._BaseActor.on_receive
    result = await result
  File "/root/miniconda3/envs/py39_mhy/lib/python3.9/site-packages/xinference/core/utils.py", line 30, in wrapped
    ret = await func(*args, **kwargs)
  File "/root/miniconda3/envs/py39_mhy/lib/python3.9/site-packages/xinference/core/model.py", line 67, in wrapped_func
    ret = await fn(self, *args, **kwargs)
  File "/root/miniconda3/envs/py39_mhy/lib/python3.9/site-packages/xinference/core/model.py", line 236, in chat
    raise AttributeError(f"Model {self._model.model_spec} is not for chat.")
AttributeError: [address=0.0.0.0:35837, pid=44729] Model model_format='pytorch' model_size_in_billions=13 quantizations=['none'] model_id='FlagAlpha/Llama2-Chinese-13b-Chat' model_hub='huggingface' model_uri='file:///root/chinese-llama2' model_revision=None is not for chat.

ghkl98 avatar Nov 26 '23 04:11 ghkl98

Are you sure you selected chat when registering the model? It looks like the model only has the generate ability, not the chat ability.
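One quick way to check what the running instance actually reports (a minimal sketch; the /v1/models listing route exists in xinference's RESTful API, but the exact response layout differs between versions, so treat the key names you see as version-specific):

```python
# Minimal sketch: ask the local xinference endpoint which models are running and what
# abilities they report. Endpoint address is an assumption; look for "chat" under
# model_ability in the printed description of the launched model.
import requests

resp = requests.get("http://127.0.0.1:9997/v1/models")
resp.raise_for_status()
print(resp.json())
```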

aresnow1 avatar Nov 27 '23 02:11 aresnow1

I ran into this problem too. Keeping only chat in model_ability fixed it for me.
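A minimal sketch of re-registering and relaunching the custom model through the Python client so that model_ability lists chat; the Client method and keyword names below are written from memory of the 2023-era API and should be checked against your installed version:

```python
# Sketch only: re-register the custom LLM with "chat" in model_ability, then launch it.
# register_model / launch_model argument names are assumptions based on the docs of that era.
import json
from xinference.client import Client

client = Client("http://127.0.0.1:9997")

with open("model.json") as f:  # the model.json shown earlier in this thread
    spec = json.load(f)
assert "chat" in spec["model_ability"], "without 'chat', model.chat() raises 'is not for chat'"

client.register_model(model_type="LLM", model=json.dumps(spec), persist=True)
uid = client.launch_model(
    model_name=spec["model_name"],
    model_format="pytorch",
    model_size_in_billions=13,
    quantization="none",
)
print(uid)
```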

faroasis avatar Nov 28 '23 08:11 faroasis

That said, there is a new problem now: the Chinese output comes back garbled...

faroasis avatar Nov 28 '23 08:11 faroasis

Could you post a link to this model? @faroasis

aresnow1 avatar Nov 28 '23 08:11 aresnow1

Could you post a link to this model? @faroasis

https://huggingface.co/FlagAlpha/Llama2-Chinese-7b-Chat

@aresnow1

Below is my registration config; no quantization is used at runtime.

{ "version": 1, "context_length": 2048, "model_name": "llama-2-cn-chat-7b", "model_lang": [ "zh" ], "model_ability": [ "chat" ], "model_description": "This is a custom model description.", "model_specs": [ { "model_format": "pytorch", "model_size_in_billions": 7, "quantizations": [ "4-bit", "8-bit", "none" ], "model_id": "", "model_hub": "huggingface", "model_uri": "C:\\llama2\\cn_chat", "model_revision": null } ], "prompt_style": { "style_name": "LLAMA2", "system_prompt": "<s>[INST] <<SYS>>\nYou are a helpful AI assistant.\n<</SYS>>\n\n", "roles": [ "[INST]", "[/INST]" ], "intra_message_sep": " ", "inter_message_sep": " </s><s>", "stop": null, "stop_token_ids": null } }

faroasis avatar Nov 29 '23 01:11 faroasis

Try changing roles to ["Human", "Assistant"] @faroasis

aresnow1 avatar Nov 30 '23 02:11 aresnow1

Try changing roles to ["Human", "Assistant"] @faroasis

The problem is not there. It is more likely that the LlamaTokenizer vocabulary is too small, so Chinese characters get split across several tokens. If you decode the whole output_ids at once, the Chinese text comes out intact; the current approach of emitting output every stream_interval chops multi-byte Chinese characters apart (a short illustration follows).

LlamaTokenizer(name_or_path='C:\llama2\cn_chat', vocab_size=32000, model_max_length=1000000000000000019884624838656, is_fast=False, padding_side='right', truncation_side='right', special_tokens={'bos_token': AddedToken("<s>", rstrip=False, lstrip=False, single_word=False, normalized=False), 'eos_token': AddedToken("</s>", rstrip=False, lstrip=False, single_word=False, normalized=False), 'unk_token': AddedToken("<unk>", rstrip=False, lstrip=False, single_word=False, normalized=False)}, clean_up_tokenization_spaces=False)
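A small sketch of that effect (the local tokenizer path is the one from the repr above and is only an example; which characters fall back to byte pieces depends on the 32k vocabulary):

```python
# Sketch: with the 32k Llama tokenizer, Chinese characters outside the vocabulary are stored
# as byte-fallback tokens (e.g. '<0xE4>'), so decoding each token on its own yields '\ufffd'
# fragments, while decoding the accumulated ids reassembles the character.
from transformers import LlamaTokenizer

tok = LlamaTokenizer.from_pretrained("C:/llama2/cn_chat")  # illustrative local path

ids = tok.encode("中文乱码", add_special_tokens=False)
print(tok.convert_ids_to_tokens(ids))   # inspect which pieces are byte-fallback tokens
print([tok.decode([i]) for i in ids])   # per-token decode: replacement characters for byte pieces
print(tok.decode(ids))                  # full decode: the original Chinese text
```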

Also, to load the model in 4-bit I had to change line 384 of model/llm/pytorch/core.py to if self._device == "cuda":  # and self._is_linux(): — at least on my machine that works fine: Windows 10 Pro, torch 2.0.1 + CUDA 11.7. (A generic 4-bit load sketch follows below.)

@aresnow1
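For reference, the kind of 4-bit load that this edit unlocks on Windows can also be tried directly with transformers + bitsandbytes, independent of xinference (a sketch only; the model path is hypothetical and bitsandbytes support on Windows varies by build):

```python
# Sketch: plain transformers 4-bit load of a local Llama-2 checkpoint, outside xinference.
# Requires a bitsandbytes build that works on your platform; the model path is hypothetical.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_path = "C:/llama2/cn_chat"
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, quantization_config=bnb, device_map="auto")
```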

faroasis avatar Dec 01 '23 12:12 faroasis


Where do I configure the LlamaTokenizer? I could not find it in Xinference's custom model configuration, and I am also hitting the garbled Chinese problem 🥹

xiaolibuzai-ovo avatar Dec 03 '23 07:12 xiaolibuzai-ovo


It is not a configuration option; I modified the source code.

faroasis avatar Dec 04 '23 00:12 faroasis


Which source code did you change?

xiaolibuzai-ovo avatar Dec 04 '23 01:12 xiaolibuzai-ovo

I did not build from this repo's source; I edited the installed package directly in site-packages.

faroasis avatar Dec 04 '23 01:12 faroasis


Is that Xinference's source code, or the model's?

xiaolibuzai-ovo avatar Dec 04 '23 01:12 xiaolibuzai-ovo

Thanks for sharing. If you are willing, could you open a PR to help fix this?


aresnow1 avatar Dec 04 '23 01:12 aresnow1

site-packages\xinference\model\llm\pytorch\core.py, PytorchModel._load_model

site-packages\xinference\model\llm\pytorch\utils.py, generate_stream(), around line 215

I have not come up with a good fix either, because there is no way to tell which tokens make up a complete character... The official demo does not have this problem, though, and I have not figured out why yet -_- https://github.com/FlagAlpha/Llama2-Chinese/blob/main/examples/chat_gradio.py (one possible direction is sketched below)

@aresnow1
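One possible direction, sketched below: decode the accumulated output_ids at every interval and hold the tail back while it still ends in a replacement character, which is roughly what transformers' TextStreamer does. All names here are illustrative, not xinference's actual generate_stream implementation:

```python
# Sketch of a streaming decode that never emits partial UTF-8 characters.
# Illustrative only; not the actual xinference generate_stream code.
def iter_stream_text(tokenizer, output_ids, stream_interval=2):
    emitted = 0  # number of characters already yielded
    for end in range(stream_interval, len(output_ids) + stream_interval, stream_interval):
        text = tokenizer.decode(output_ids[:end], skip_special_tokens=True)
        if text.endswith("\ufffd"):
            # The last token(s) cover only part of a multi-byte character; wait for more ids.
            continue
        yield text[emitted:]
        emitted = len(text)
```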

faroasis avatar Dec 04 '23 02:12 faroasis

This issue is stale because it has been open for 7 days with no activity.

github-actions[bot] avatar Aug 08 '24 19:08 github-actions[bot]

This issue was closed because it has been inactive for 5 days since being marked as stale.

github-actions[bot] avatar Aug 13 '24 19:08 github-actions[bot]