
[BUG] RuntimeError: Internal: could not parse ModelProto from chatglm3-6b/tokenizer.model

Open ZhichaoOuyang opened this issue 11 months ago • 2 comments

Running python startup.py -a

Error output:
==============================Langchain-Chatchat Configuration==============================
OS: Linux-4.18.0-2.4.3.kwai.x86_64-x86_64-with-glibc2.27
Python version: 3.11.5 (main, Sep 11 2023, 13:54:46) [GCC 11.2.0]
Project version: v0.2.10
langchain version: 0.0.354
fastchat version: 0.2.35

Current text splitter: ChineseRecursiveTextSplitter
LLM models being started: ['chatglm3-6b', 'zhipu-api', 'openai-api'] @ cpu
{'device': 'cuda', 'host': '0.0.0.0', 'infer_turbo': False, 'model_path': 'chatglm3-6b', 'model_path_exists': True, 'port': 20002}
{'api_key': '', 'device': 'auto', 'host': '0.0.0.0', 'infer_turbo': False, 'online_api': True, 'port': 21001, 'provider': 'ChatGLMWorker', 'version': 'glm-4', 'worker_class': <class 'server.model_workers.zhipu.ChatGLMWorker'>}
{'api_base_url': 'https://api.openai.com/v1', 'api_key': '', 'device': 'auto', 'host': '0.0.0.0', 'infer_turbo': False, 'model_name': 'gpt-4', 'online_api': True, 'openai_proxy': '', 'port': 20002}
Current embeddings model: bge-large-zh-v1.5 @ cpu
==============================Langchain-Chatchat Configuration==============================

2024-02-28 08:16:53,771 - startup.py[line:655] - INFO: Starting services:
2024-02-28 08:16:53,771 - startup.py[line:656] - INFO: To view the llm_api logs, go to /home/ouyangzhichao/Langchain-Chatchat/logs
/home/ouyangzhichao/anaconda3/lib/python3.11/site-packages/langchain_core/_api/deprecation.py:117: LangChainDeprecationWarning: model startup will be rewritten in Langchain-Chatchat 0.3.x to support more modes and faster startup; the related 0.2.x functionality will be deprecated
  warn_deprecated(
2024-02-28 08:17:01 | INFO | model_worker | Register to controller
2024-02-28 08:17:01 | ERROR | stderr | INFO: Started server process [3160451]
2024-02-28 08:17:01 | ERROR | stderr | INFO: Waiting for application startup.
2024-02-28 08:17:01 | ERROR | stderr | INFO: Application startup complete.
2024-02-28 08:17:01 | ERROR | stderr | INFO: Uvicorn running on http://0.0.0.0:20000 (Press CTRL+C to quit)
2024-02-28 08:17:01 | INFO | model_worker | Loading the model ['chatglm3-6b'] on worker 05c9ab76 ...
2024-02-28 08:17:02 | ERROR | stderr | Process model_worker - chatglm3-6b:
Traceback (most recent call last):
  File "/home/ouyangzhichao/anaconda3/lib/python3.11/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/home/ouyangzhichao/anaconda3/lib/python3.11/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/home/ouyangzhichao/Langchain-Chatchat/startup.py", line 389, in run_model_worker
    app = create_model_worker_app(log_level=log_level, **kwargs)
  File "/home/ouyangzhichao/Langchain-Chatchat/startup.py", line 217, in create_model_worker_app
    worker = ModelWorker(
  File "/home/ouyangzhichao/anaconda3/lib/python3.11/site-packages/fastchat/serve/model_worker.py", line 77, in __init__
    self.model, self.tokenizer = load_model(
  File "/home/ouyangzhichao/anaconda3/lib/python3.11/site-packages/fastchat/model/model_adapter.py", line 348, in load_model
    model, tokenizer = adapter.load_model(model_path, kwargs)
  File "/home/ouyangzhichao/anaconda3/lib/python3.11/site-packages/fastchat/model/model_adapter.py", line 816, in load_model
    tokenizer = AutoTokenizer.from_pretrained(
  File "/home/ouyangzhichao/anaconda3/lib/python3.11/site-packages/transformers/models/auto/tokenization_auto.py", line 801, in from_pretrained
    return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
  File "/home/ouyangzhichao/anaconda3/lib/python3.11/site-packages/transformers/tokenization_utils_base.py", line 2029, in from_pretrained
    return cls._from_pretrained(
  File "/home/ouyangzhichao/anaconda3/lib/python3.11/site-packages/transformers/tokenization_utils_base.py", line 2261, in _from_pretrained
    tokenizer = cls(*init_inputs, **init_kwargs)
  File "/home/ouyangzhichao/.cache/huggingface/modules/transformers_modules/THUDM/chatglm3-6b/9addbe01105ca1939dd60a0e5866a1812be9daea/tokenization_chatglm.py", line 109, in __init__
    self.tokenizer = SPTokenizer(vocab_file)
  File "/home/ouyangzhichao/.cache/huggingface/modules/transformers_modules/THUDM/chatglm3-6b/9addbe01105ca1939dd60a0e5866a1812be9daea/tokenization_chatglm.py", line 18, in __init__
    self.sp_model = SentencePieceProcessor(model_file=model_path)
  File "/home/ouyangzhichao/anaconda3/lib/python3.11/site-packages/sentencepiece/__init__.py", line 468, in Init
    self.Load(model_file=model_file, model_proto=model_proto)
  File "/home/ouyangzhichao/anaconda3/lib/python3.11/site-packages/sentencepiece/__init__.py", line 961, in Load
    return self.LoadFromFile(model_file)
  File "/home/ouyangzhichao/anaconda3/lib/python3.11/site-packages/sentencepiece/__init__.py", line 316, in LoadFromFile
    return _sentencepiece.SentencePieceProcessor_LoadFromFile(self, arg)
RuntimeError: Internal: could not parse ModelProto from chatglm3-6b/tokenizer.model

I followed all the prerequisite steps as required. Could someone please take a look?

ZhichaoOuyang avatar Feb 28 '24 08:02 ZhichaoOuyang

🤖

The error message "RuntimeError: Internal: could not parse ModelProto from chatglm3-6b/tokenizer.model" suggests that there is an issue with loading the tokenizer model for the chatglm3-6b model. This could be due to several reasons, such as the model file being corrupted, missing, or incompatible with the current setup.

To resolve this issue, follow these steps:

  1. Verify Model Path: Ensure that the chatglm3-6b model and its tokenizer are correctly placed in the directory specified by MODEL_ROOT_PATH in your model_config.py.example file. If MODEL_ROOT_PATH is empty, the model should be in a directory that matches its key in the MODEL_PATH dictionary, relative to the script's running directory.

  2. Check Model Integrity: Make sure that the chatglm3-6b model files, especially the tokenizer model file, are not corrupted. You might want to re-download or obtain a fresh copy of the model and its tokenizer.

  3. Compatibility Check: Ensure that the model and tokenizer are compatible with the libraries and Python version you are using. Since you are using Python version 3.11.5, verify that all dependencies (like TensorFlow, PyTorch, or any other relevant library) are up-to-date and compatible with Python 3.11.5.

  4. Update Dependencies: Sometimes, updating your dependencies can resolve compatibility issues. Make sure that your environment has the latest versions of necessary libraries (e.g., Hugging Face Transformers, if used).

  5. Model Configuration: Double-check the model_config.py.example file to ensure that the path to the chatglm3-6b model and its tokenizer is correctly specified. If you have modified the default paths or names, ensure that these modifications are accurately reflected in the configuration.

If after following these steps the issue persists, it might be helpful to look for any specific documentation or community posts related to loading the chatglm3-6b model in the context of your project or the libraries you are using.
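As a quick sanity check for step 2 above, the short Python sketch below (a hypothetical helper, not part of Langchain-Chatchat) inspects a tokenizer.model file for the most common causes of this error: the file is missing, or an incomplete download left a small Git LFS pointer stub or truncated file in place of the real SentencePiece binary.

```python
from pathlib import Path

# Hypothetical diagnostic helper (not part of Langchain-Chatchat).
LFS_POINTER_MAGIC = b"version https://git-lfs.github.com/spec/v1"

def check_model_file(path: str) -> str:
    """Classify a tokenizer/model file: missing, LFS pointer stub,
    suspiciously small, or plausibly complete."""
    p = Path(path)
    if not p.exists():
        return "missing: file does not exist"
    head = p.read_bytes()[:len(LFS_POINTER_MAGIC)]
    if head == LFS_POINTER_MAGIC:
        # A clone made without git-lfs leaves a ~130-byte text stub
        # instead of the real (roughly 1 MB) SentencePiece binary.
        return "lfs-pointer: only the Git LFS pointer was downloaded"
    if p.stat().st_size < 100_000:
        return "truncated: file is far smaller than a real tokenizer.model"
    return "looks-ok: present and plausibly complete"

# usage: print(check_model_file("chatglm3-6b/tokenizer.model"))
```

If this reports anything other than "looks-ok", re-download the model before changing any other configuration.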

To load the chatglm3-6b model in the Langchain-Chatchat project, follow these steps:

  1. Ensure you have the model file for chatglm3-6b. According to the provided model_config.py.example file, the model identifier for chatglm3-6b is "THUDM/chatglm3-6b". This suggests that the model is hosted on Hugging Face. You should download the model from Hugging Face or ensure it's accessible in your environment.

  2. Set the MODEL_ROOT_PATH in the model_config.py.example file to the directory where you store all your models. If you place the chatglm3-6b model directly under this directory or within a subdirectory named after the model, the program will automatically detect and load it without needing to modify the MODEL_PATH dictionary for llm_model.

  3. In the LLM_MODELS list, chatglm3-6b is already included and is set as the first model, making it the default model for the API and WEBUI. If it wasn't, you would add "chatglm3-6b" to the list to include it for loading at project startup.

  4. Ensure the LLM_DEVICE is set appropriately for your hardware. It's set to "auto" by default, which automatically detects the hardware but may issue a warning. You can manually set it to "cuda", "mps", "cpu", or "xpu" depending on your system's capabilities and requirements.

  5. Rename model_config.py.example to model_config.py after making the necessary adjustments.

Here's a summary of the key configurations from your context:

MODEL_ROOT_PATH = ""  # Set this to your models' directory path

LLM_MODELS = ["chatglm3-6b", "zhipu-api", "openai-api"]  # chatglm3-6b is already set as the default model

LLM_DEVICE = "auto"  # Adjust if necessary for your hardware

MODEL_PATH = {
    "llm_model": {
        "chatglm3-6b": "THUDM/chatglm3-6b",
        # other models...
    },
    # other model configurations...
}

Ensure you have the chatglm3-6b model stored correctly in the directory specified by MODEL_ROOT_PATH or accessible online through Hugging Face with the identifier "THUDM/chatglm3-6b".
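To make the resolution order described above concrete, here is a minimal, hypothetical sketch of how a local directory could be preferred over the Hugging Face repo id. The names mirror model_config.py, but this is illustrative only, not the project's actual code.

```python
from pathlib import Path

# Illustrative sketch; names mirror model_config.py but this is NOT
# the project's actual resolution logic.
MODEL_ROOT_PATH = ""  # set to your models directory, e.g. "/data/models"
MODEL_PATH = {"llm_model": {"chatglm3-6b": "THUDM/chatglm3-6b"}}

def resolve_model_path(name: str) -> str:
    """Return a local directory for `name` if one exists, else a repo id."""
    value = MODEL_PATH["llm_model"].get(name, name)
    if Path(value).is_dir():  # an explicit local path in MODEL_PATH wins
        return value
    if MODEL_ROOT_PATH:       # look under the models root by key or repo name
        for candidate in (name, value, value.split("/")[-1]):
            local = Path(MODEL_ROOT_PATH) / candidate
            if local.is_dir():
                return str(local)
    return value              # fall back to downloading from Hugging Face
```

With MODEL_ROOT_PATH empty and no local directory present, the Hugging Face repo id "THUDM/chatglm3-6b" is used as-is.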

For more information, you can refer to the model_config.py.example file in the Langchain-Chatchat repository.


About Dosu

This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.


dosubot[bot] avatar Feb 28 '24 08:02 dosubot[bot]

Did you download the model from HF? Could the download be incomplete?

zRzRzRzRzRzRzR avatar Feb 28 '24 12:02 zRzRzRzRzRzRzR

Did you download the model from HF? Could the download be incomplete?

Yes, it was an incomplete download. Re-downloading fixed it.

ZhichaoOuyang avatar Feb 29 '24 02:02 ZhichaoOuyang

Running python startup.py -a fails with: RuntimeError: Internal: could not parse ModelProto from chatglm3-6b/tokenizer.model

I followed all the prerequisite steps as required. Could someone please take a look?

Bro, how did you solve it? I'm hitting the same problem.

liyiyiya avatar Apr 23 '24 04:04 liyiyiya

Did you download the model from HF? Could the download be incomplete?

Yes, it was an incomplete download. Re-downloading fixed it.

How do I re-download? Do I go through the docs again from the start? Do I need to re-initialize as well?

liyiyiya avatar Apr 23 '24 04:04 liyiyiya

(screenshot) Mine looks like this. Is it complete? I don't see anything missing.

liyiyiya avatar Apr 23 '24 04:04 liyiyiya

Buddy, I ran into the same problem. Try re-uploading tokenizer.model and see if that fixes it.

swordqiqi avatar Apr 25 '24 03:04 swordqiqi

Same question here. Incomplete in what way? I've already re-downloaded many times.

biu0924 avatar Jul 13 '24 11:07 biu0924

(screenshot) The model weights actually have to be downloaded. If a plain git clone finishes almost immediately, it has not pulled the 10+ GB of model files.

Stevenzwzhai avatar Jul 25 '24 07:07 Stevenzwzhai

The root cause is that git lfs is not installed; installing it fixes the problem. https://git-lfs.com/
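Building on the git-lfs point above, this is a sketch of the re-download workflow. The function name is illustrative, and the clone URL follows the usual Hugging Face layout.

```shell
# Sketch of a re-download workflow once git-lfs is installed
# (function name is illustrative; clone URL follows the usual HF layout).
redownload_model() {
  repo_url="$1"
  dest="$2"
  git lfs install                 # idempotent; sets up the LFS filters
  git clone "$repo_url" "$dest"   # with LFS active, the big files come down too
  # Pointer stubs are ~130-byte text files; flag any *.model that stayed tiny:
  find "$dest" -name '*.model' -size -100k -print
}
# usage: redownload_model https://huggingface.co/THUDM/chatglm3-6b chatglm3-6b
```

If an existing clone only contains pointer stubs, running git lfs install followed by git lfs pull inside that clone should fetch the real files without re-cloning.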

lixuancn avatar Aug 13 '24 06:08 lixuancn