BUG: Unable to launch baichuan2-13B-chat
Describe the bug
An error is raised when launching baichuan2-13B-chat.
To Reproduce
- Python version: 3.9.17
- xinference version: 0.6.2 (CUDA 11.6)
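For reference, launching the built-in model through the xinference Python client looks roughly like the sketch below; the endpoint, port, and exact launch parameters are assumptions, since the report does not include the launch command.

```python
# Minimal reproduction sketch (assumes a local xinference server on the
# default port 9997 and the built-in model name "baichuan-2-chat").
from xinference.client import Client

client = Client("http://localhost:9997")

# Launching the 13B PyTorch build of Baichuan 2 Chat triggers the
# error shown in "Additional context" below.
model_uid = client.launch_model(
    model_name="baichuan-2-chat",
    model_format="pytorch",
    model_size_in_billions=13,
)
```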
Expected behavior
baichuan2-13B-chat launches successfully.
Additional context
```
2023-11-16 09:18:02,923 xinference.api.restful_api 209466 ERROR [address=0.0.0.0:32783, pid=215568] _set_gradient_checkpointing() got an unexpected keyword argument 'enable'
Traceback (most recent call last):
  File "/data/conda_cache/envs/xinference/lib/python3.9/site-packages/xinference/api/restful_api.py", line 408, in launch_model
    model_uid = await (await self._get_supervisor_ref()).launch_builtin_model(
  File "/data/conda_cache/envs/xinference/lib/python3.9/site-packages/xoscar/backends/context.py", line 227, in send
    return self._process_result_message(result)
  File "/data/conda_cache/envs/xinference/lib/python3.9/site-packages/xoscar/backends/context.py", line 102, in _process_result_message
    raise message.as_instanceof_cause()
  File "/data/conda_cache/envs/xinference/lib/python3.9/site-packages/xoscar/backends/pool.py", line 657, in send
    result = await self._run_coro(message.message_id, coro)
  File "/data/conda_cache/envs/xinference/lib/python3.9/site-packages/xoscar/backends/pool.py", line 368, in _run_coro
    return await coro
  File "/data/conda_cache/envs/xinference/lib/python3.9/site-packages/xoscar/api.py", line 306, in on_receive
    return await super().on_receive(message)  # type: ignore
  File "xoscar/core.pyx", line 558, in on_receive
    raise ex
  File "xoscar/core.pyx", line 520, in xoscar.core._BaseActor.on_receive
    async with self._lock:
  File "xoscar/core.pyx", line 521, in xoscar.core._BaseActor.on_receive
    with debug_async_timeout('actor_lock_timeout',
  File "xoscar/core.pyx", line 526, in xoscar.core._BaseActor.on_receive
    result = await result
  File "/data/conda_cache/envs/xinference/lib/python3.9/site-packages/xinference/core/supervisor.py", line 398, in launch_builtin_model
    await _launch_one_model(rep_model_uid)
  File "/data/conda_cache/envs/xinference/lib/python3.9/site-packages/xinference/core/supervisor.py", line 367, in _launch_one_model
    await worker_ref.launch_builtin_model(
  File "xoscar/core.pyx", line 284, in __pyx_actor_method_wrapper
    async with lock:
  File "xoscar/core.pyx", line 287, in xoscar.core.__pyx_actor_method_wrapper
    result = await result
  File "/data/conda_cache/envs/xinference/lib/python3.9/site-packages/xinference/core/utils.py", line 27, in wrapped
    ret = await func(*args, **kwargs)
  File "/data/conda_cache/envs/xinference/lib/python3.9/site-packages/xinference/core/worker.py", line 316, in launch_builtin_model
    await model_ref.load()
  File "/data/conda_cache/envs/xinference/lib/python3.9/site-packages/xoscar/backends/context.py", line 227, in send
    return self._process_result_message(result)
  File "/data/conda_cache/envs/xinference/lib/python3.9/site-packages/xoscar/backends/context.py", line 102, in _process_result_message
    raise message.as_instanceof_cause()
  File "/data/conda_cache/envs/xinference/lib/python3.9/site-packages/xoscar/backends/pool.py", line 657, in send
    result = await self._run_coro(message.message_id, coro)
  File "/data/conda_cache/envs/xinference/lib/python3.9/site-packages/xoscar/backends/pool.py", line 368, in _run_coro
    return await coro
  File "/data/conda_cache/envs/xinference/lib/python3.9/site-packages/xoscar/api.py", line 306, in on_receive
    return await super().on_receive(message)  # type: ignore
  File "xoscar/core.pyx", line 558, in on_receive
    raise ex
  File "xoscar/core.pyx", line 520, in xoscar.core._BaseActor.on_receive
    async with self._lock:
  File "xoscar/core.pyx", line 521, in xoscar.core._BaseActor.on_receive
    with debug_async_timeout('actor_lock_timeout',
  File "xoscar/core.pyx", line 524, in xoscar.core._BaseActor.on_receive
    result = func(*args, **kwargs)
  File "/data/conda_cache/envs/xinference/lib/python3.9/site-packages/xinference/core/model.py", line 166, in load
    self._model.load()
  File "/data/conda_cache/envs/xinference/lib/python3.9/site-packages/xinference/model/llm/pytorch/core.py", line 182, in load
    self._model, self._tokenizer = self._load_model(**kwargs)
  File "/data/conda_cache/envs/xinference/lib/python3.9/site-packages/xinference/model/llm/pytorch/baichuan.py", line 60, in _load_model
    model = AutoModelForCausalLM.from_pretrained(
  File "/data/conda_cache/envs/xinference/lib/python3.9/site-packages/transformers/models/auto/auto_factory.py", line 561, in from_pretrained
    return model_class.from_pretrained(
  File "/root/.cache/huggingface/modules/transformers_modules/baichuan-2-chat-pytorch-13b/modeling_baichuan.py", line 670, in from_pretrained
    return super(BaichuanForCausalLM, cls).from_pretrained(pretrained_model_name_or_path, *model_args,
  File "/data/conda_cache/envs/xinference/lib/python3.9/site-packages/transformers/modeling_utils.py", line 3236, in from_pretrained
    model = cls(config, *model_args, **model_kwargs)
  File "/root/.cache/huggingface/modules/transformers_modules/baichuan-2-chat-pytorch-13b/modeling_baichuan.py", line 539, in __init__
    self.model = BaichuanModel(config)
  File "/root/.cache/huggingface/modules/transformers_modules/baichuan-2-chat-pytorch-13b/modeling_baichuan.py", line 305, in __init__
    self.post_init()
  File "/data/conda_cache/envs/xinference/lib/python3.9/site-packages/transformers/modeling_utils.py", line 1159, in post_init
    self._backward_compatibility_gradient_checkpointing()
  File "/data/conda_cache/envs/xinference/lib/python3.9/site-packages/transformers/modeling_utils.py", line 1163, in _backward_compatibility_gradient_checkpointing
    self.gradient_checkpointing_enable()
  File "/data/conda_cache/envs/xinference/lib/python3.9/site-packages/transformers/modeling_utils.py", line 1872, in gradient_checkpointing_enable
    self._set_gradient_checkpointing(enable=True, gradient_checkpointing_func=gradient_checkpointing_func)
TypeError: [address=0.0.0.0:32783, pid=215568] _set_gradient_checkpointing() got an unexpected keyword argument 'enable'
```
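The failure is a signature mismatch between the Baichuan repository's custom modeling code (loaded via `trust_remote_code`) and newer transformers releases: starting around transformers 4.35, `gradient_checkpointing_enable()` calls `_set_gradient_checkpointing(enable=..., gradient_checkpointing_func=...)`, while the remote `modeling_baichuan.py` still overrides the hook with the older `(module, value)` signature. A minimal sketch of the mismatch, with both sides reduced to illustrative stubs (not the real classes):

```python
# Illustrative stubs showing why the TypeError occurs.

class PreTrainedModelNew:  # stands in for transformers >= 4.35
    def gradient_checkpointing_enable(self, gradient_checkpointing_func=None):
        # New-style call: passes 'enable' and 'gradient_checkpointing_func'
        # as keyword arguments to the hook.
        self._set_gradient_checkpointing(
            enable=True, gradient_checkpointing_func=gradient_checkpointing_func
        )

    def _set_gradient_checkpointing(self, enable=True, gradient_checkpointing_func=None):
        pass  # the default implementation accepts the new keywords


class BaichuanLikeModel(PreTrainedModelNew):
    # Remote code written against the pre-4.35 convention overrides the
    # hook with the old (module, value) signature ...
    def _set_gradient_checkpointing(self, module, value=False):
        pass


m = BaichuanLikeModel()
try:
    # ... so the new-style call fails exactly as in the log above.
    m.gradient_checkpointing_enable()
except TypeError as e:
    print(e)  # _set_gradient_checkpointing() got an unexpected keyword argument 'enable'
```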
Try `pip install transformers==4.33`; we are working on resolving the issue with the latest transformers version.
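After downgrading, you can confirm that the environment xinference runs in actually picked up a pre-4.35 build; a quick check, assuming the same conda env as in the traceback:

```python
# Verify the downgrade took effect (the hook signature changed in 4.35).
import transformers
from packaging import version  # packaging ships as a transformers dependency

assert version.parse(transformers.__version__) < version.parse("4.35"), (
    f"transformers {transformers.__version__} still uses the new "
    "_set_gradient_checkpointing(enable=...) signature"
)
print("transformers", transformers.__version__, "is compatible with the Baichuan remote code")
```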
This issue is stale because it has been open for 7 days with no activity.
This issue was closed because it has been inactive for 5 days since being marked as stale.