
Server error: 500 - [address=0.0.0.0:36357, pid=110186] expected str, bytes or os.PathLike object, not NoneType

Remember12344 opened this issue 2 months ago • 3 comments

System Info

Linux, CUDA 13.0, transformers 4.57.1, Python 3.11. Deploying Qwen3-14B with the transformers engine and GPTQ quantization fails with:

Server error: 500 - [address=0.0.0.0:36357, pid=110186] expected str, bytes or os.PathLike object, not NoneType

Full log:

INFO 10-29 01:18:07 [__init__.py:216] Automatically detected platform cuda.
2025-10-29 01:18:10,538 xinference.core.model 110646 INFO Start requests handler.
2025-10-29 01:18:10,554 transformers.models.auto.tokenization_auto 110646 INFO Could not locate the tokenizer configuration file, will try to use the model config instead.
2025-10-29 01:18:10,558 transformers.configuration_utils 110646 INFO Model config Qwen3Config { "attention_bias": false, "attention_dropout": 0.0, "head_dim": 128, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 22016, "layer_types": [ "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention" ], "max_position_embeddings": 32768, "max_window_layers": 28, "model_type": "qwen3", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 32, "rms_norm_eps": 1e-06, "rope_scaling": null, "rope_theta": 10000.0, "sliding_window": null, "tie_word_embeddings": false, "transformers_version": "4.57.1", "use_cache": true, "use_sliding_window": false, "vocab_size": 151936 }


2025-10-29 01:18:10,562 transformers.tokenization_utils_base 110646 INFO loading file vocab.json
2025-10-29 01:18:10,562 transformers.tokenization_utils_base 110646 INFO loading file merges.txt
2025-10-29 01:18:10,563 transformers.tokenization_utils_base 110646 INFO loading file tokenizer.json
2025-10-29 01:18:10,563 transformers.tokenization_utils_base 110646 INFO loading file added_tokens.json
2025-10-29 01:18:10,563 transformers.tokenization_utils_base 110646 INFO loading file special_tokens_map.json
2025-10-29 01:18:10,563 transformers.tokenization_utils_base 110646 INFO loading file tokenizer_config.json
2025-10-29 01:18:10,563 transformers.tokenization_utils_base 110646 INFO loading file chat_template.jinja


2025-10-29 01:18:10,566 xinference.core.worker 110466 ERROR Failed to load model qwen3-0
Traceback (most recent call last):
  File "/data/conda/envs/xinference_env/lib/python3.11/site-packages/xinference/core/worker.py", line 1140, in launch_builtin_model
    await model_ref.load()
  File "/data/conda/envs/xinference_env/lib/python3.11/site-packages/xoscar/backends/context.py", line 262, in send
    return self._process_result_message(result)
  File "/data/conda/envs/xinference_env/lib/python3.11/site-packages/xoscar/backends/context.py", line 111, in _process_result_message
    raise message.as_instanceof_cause()
  File "/data/conda/envs/xinference_env/lib/python3.11/site-packages/xoscar/backends/pool.py", line 689, in send
    result = await self._run_coro(message.message_id, coro)
  File "/data/conda/envs/xinference_env/lib/python3.11/site-packages/xoscar/backends/pool.py", line 389, in _run_coro
    return await coro
  File "/data/conda/envs/xinference_env/lib/python3.11/site-packages/xoscar/api.py", line 418, in __on_receive__
    return await super().__on_receive__(message)  # type: ignore
  File "xoscar/core.pyx", line 564, in __on_receive__
    raise ex
  File "xoscar/core.pyx", line 526, in xoscar.core._BaseActor.__on_receive__
    async with self._lock:
  File "xoscar/core.pyx", line 527, in xoscar.core._BaseActor.__on_receive__
    with debug_async_timeout('actor_lock_timeout',
  File "xoscar/core.pyx", line 532, in xoscar.core._BaseActor.__on_receive__
    result = await result
  File "/data/conda/envs/xinference_env/lib/python3.11/site-packages/xinference/core/model.py", line 387, in load
    await asyncio.to_thread(self._model.load)
  File "/data/conda/envs/xinference_env/lib/python3.11/asyncio/threads.py", line 25, in to_thread
    return await loop.run_in_executor(None, func_call)
  File "/data/conda/envs/xinference_env/lib/python3.11/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/data/conda/envs/xinference_env/lib/python3.11/site-packages/xinference/model/llm/transformers/core.py", line 1014, in load
    super().load()
  File "/data/conda/envs/xinference_env/lib/python3.11/site-packages/xinference/model/llm/transformers/core.py", line 343, in load
    self._model, self._tokenizer = self._load_model(**kwargs)
  File "/data/conda/envs/xinference_env/lib/python3.11/site-packages/xinference/model/llm/transformers/core.py", line 212, in _load_model
    tokenizer = AutoTokenizer.from_pretrained(
  File "/data/conda/envs/xinference_env/lib/python3.11/site-packages/transformers/models/auto/tokenization_auto.py", line 1159, in from_pretrained
    return tokenizer_class_fast.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
  File "/data/conda/envs/xinference_env/lib/python3.11/site-packages/transformers/tokenization_utils_base.py", line 2097, in from_pretrained
    return cls._from_pretrained(
  File "/data/conda/envs/xinference_env/lib/python3.11/site-packages/transformers/tokenization_utils_base.py", line 2135, in _from_pretrained
    slow_tokenizer = (cls.slow_tokenizer_class)._from_pretrained(
  File "/data/conda/envs/xinference_env/lib/python3.11/site-packages/transformers/tokenization_utils_base.py", line 2343, in _from_pretrained
    tokenizer = cls(*init_inputs, **init_kwargs)
  File "/data/conda/envs/xinference_env/lib/python3.11/site-packages/transformers/models/qwen2/tokenization_qwen2.py", line 172, in __init__
    with open(vocab_file, encoding="utf-8") as vocab_handle:
TypeError: [address=0.0.0.0:39061, pid=110646] expected str, bytes or os.PathLike object, not NoneType

2025-10-29 01:18:10,598 xinference.core.worker 110466 ERROR [request 0f497d7a-b422-11f0-9357-bcfce7685faa] Leave launch_builtin_model, error: [address=0.0.0.0:39061, pid=110646] expected str, bytes or os.PathLike object, not NoneType, elapsed time: 7 s
Traceback (most recent call last):
  File "/data/conda/envs/xinference_env/lib/python3.11/site-packages/xinference/core/utils.py", line 93, in wrapped
    ret = await func(*args, **kwargs)
  File "/data/conda/envs/xinference_env/lib/python3.11/site-packages/xinference/core/worker.py", line 1140, in launch_builtin_model
    await model_ref.load()
  (same xoscar/xinference/transformers call chain as above)
TypeError: [address=0.0.0.0:39061, pid=110646] expected str, bytes or os.PathLike object, not NoneType

2025-10-29 01:18:10,601 xinference.api.restful_api 110373 ERROR [address=0.0.0.0:39061, pid=110646] expected str, bytes or os.PathLike object, not NoneType
Traceback (most recent call last):
  File "/data/conda/envs/xinference_env/lib/python3.11/site-packages/xinference/api/restful_api.py", line 1241, in launch_model
    model_uid = await (await self._get_supervisor_ref()).launch_builtin_model(
  File "/data/conda/envs/xinference_env/lib/python3.11/site-packages/xoscar/backends/context.py", line 262, in send
    return self._process_result_message(result)
  File "/data/conda/envs/xinference_env/lib/python3.11/site-packages/xoscar/backends/context.py", line 111, in _process_result_message
    raise message.as_instanceof_cause()
  File "/data/conda/envs/xinference_env/lib/python3.11/site-packages/xoscar/backends/pool.py", line 689, in send
    result = await self._run_coro(message.message_id, coro)
  File "/data/conda/envs/xinference_env/lib/python3.11/site-packages/xoscar/backends/pool.py", line 389, in _run_coro
    return await coro
  File "/data/conda/envs/xinference_env/lib/python3.11/site-packages/xoscar/api.py", line 418, in __on_receive__
    return await super().__on_receive__(message)  # type: ignore
  File "xoscar/core.pyx", line 564, in __on_receive__
    raise ex
  File "xoscar/core.pyx", line 526, in xoscar.core._BaseActor.__on_receive__
    async with self._lock:
  File "xoscar/core.pyx", line 527, in xoscar.core._BaseActor.__on_receive__
    with debug_async_timeout('actor_lock_timeout',
  File "xoscar/core.pyx", line 532, in xoscar.core._BaseActor.__on_receive__
    result = await result
  File "/data/conda/envs/xinference_env/lib/python3.11/site-packages/xinference/core/supervisor.py", line 1314, in launch_builtin_model
    await _launch_model()
  File "/data/conda/envs/xinference_env/lib/python3.11/site-packages/xinference/core/supervisor.py", line 1249, in _launch_model
    subpool_address = await _launch_one_model(
  File "/data/conda/envs/xinference_env/lib/python3.11/site-packages/xinference/core/supervisor.py", line 1200, in _launch_one_model
    subpool_address = await worker_ref.launch_builtin_model(
  File "/data/conda/envs/xinference_env/lib/python3.11/site-packages/xinference/core/utils.py", line 93, in wrapped
    ret = await func(*args, **kwargs)
  File "/data/conda/envs/xinference_env/lib/python3.11/site-packages/xinference/core/worker.py", line 1140, in launch_builtin_model
    await model_ref.load()
  File "/data/conda/envs/xinference_env/lib/python3.11/site-packages/xinference/core/model.py", line 387, in load
    await asyncio.to_thread(self._model.load)
  File "/data/conda/envs/xinference_env/lib/python3.11/asyncio/threads.py", line 25, in to_thread
    return await loop.run_in_executor(None, func_call)
  File "/data/conda/envs/xinference_env/lib/python3.11/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/data/conda/envs/xinference_env/lib/python3.11/site-packages/xinference/model/llm/transformers/core.py", line 1014, in load
    super().load()
  File "/data/conda/envs/xinference_env/lib/python3.11/site-packages/xinference/model/llm/transformers/core.py", line 343, in load
    self._model, self._tokenizer = self._load_model(**kwargs)
  File "/data/conda/envs/xinference_env/lib/python3.11/site-packages/xinference/model/llm/transformers/core.py", line 212, in _load_model
    tokenizer = AutoTokenizer.from_pretrained(
  File "/data/conda/envs/xinference_env/lib/python3.11/site-packages/transformers/models/auto/tokenization_auto.py", line 1159, in from_pretrained
    return tokenizer_class_fast.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
  File "/data/conda/envs/xinference_env/lib/python3.11/site-packages/transformers/tokenization_utils_base.py", line 2097, in from_pretrained
    return cls._from_pretrained(
  File "/data/conda/envs/xinference_env/lib/python3.11/site-packages/transformers/tokenization_utils_base.py", line 2135, in _from_pretrained
    slow_tokenizer = (cls.slow_tokenizer_class)._from_pretrained(
  File "/data/conda/envs/xinference_env/lib/python3.11/site-packages/transformers/tokenization_utils_base.py", line 2343, in _from_pretrained
    tokenizer = cls(*init_inputs, **init_kwargs)
  File "/data/conda/envs/xinference_env/lib/python3.11/site-packages/transformers/models/qwen2/tokenization_qwen2.py", line 172, in __init__
    with open(vocab_file, encoding="utf-8") as vocab_handle:
TypeError: [address=0.0.0.0:39061, pid=110646] expected str, bytes or os.PathLike object, not NoneType
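The final frame shows `Qwen2Tokenizer.__init__` being handed `vocab_file=None`: the log says the tokenizer configuration file could not be located, so transformers fell back to the slow tokenizer, which needs `vocab.json`/`merges.txt`. That usually points at an incomplete model download. A quick way to check whether the cached model directory is complete (the file list below is an assumption based on a typical Qwen-family layout, not taken from this report):

```python
from pathlib import Path

# Files a Qwen-family tokenizer normally ships with; tokenizer.json alone
# suffices for the fast tokenizer, vocab.json + merges.txt for the slow one.
EXPECTED = ["tokenizer_config.json", "tokenizer.json", "vocab.json", "merges.txt"]

def missing_tokenizer_files(model_dir: str) -> list[str]:
    """Return the expected tokenizer files absent from model_dir."""
    root = Path(model_dir)
    return [name for name in EXPECTED if not (root / name).is_file()]
```

If anything is missing, re-downloading the model (or clearing the partially downloaded cache directory first) would be the usual fix.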

Running Xinference with Docker?

  • [ ] docker
  • [x] pip install
  • [ ] installation from source

Version info

xinference 1.11.0.post1

The command used to start Xinference

UI

Reproduction

n/a
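No reproduction steps were given, but the failure mode itself is easy to demonstrate: the last traceback frame is `open(vocab_file, encoding="utf-8")` with `vocab_file=None`, which is exactly what happens when neither a fast-tokenizer file nor `vocab.json` is resolved. A minimal sketch of that final call:

```python
# Qwen2Tokenizer.__init__ effectively does open(vocab_file, encoding="utf-8");
# when no vocab file was resolved, vocab_file is None and open() raises the
# TypeError seen in the server logs.
try:
    with open(None, encoding="utf-8"):
        pass
except TypeError as exc:
    print(exc)  # expected str, bytes or os.PathLike object, not NoneType
```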

Expected behavior

Successful deployment.

Remember12344 avatar Oct 28 '25 17:10 Remember12344

We'll try to reproduce it.

qinxuye avatar Oct 29 '25 09:10 qinxuye

> We'll try to reproduce it.

Has this been resolved?

Remember12344 avatar Oct 31 '25 15:10 Remember12344

This issue is stale because it has been open for 7 days with no activity.

github-actions[bot] avatar Nov 07 '25 19:11 github-actions[bot]

This issue is stale because it has been open for 7 days with no activity.

github-actions[bot] avatar Nov 16 '25 19:11 github-actions[bot]