Fine-tuned PaddleOCR-VL model cannot be loaded or run for inference with FastDeploy
I fine-tuned a PaddleOCR-VL model on AI Studio, following https://aistudio.baidu.com/projectdetail/9857242.
Deploying with the following command fails with an error:
aistudio@jupyter-942478-9857242:~$ python -m fastdeploy.entrypoints.openai.api_server \
> --model /home/aistudio/paddleocr_vl/PaddleOCR-VL-SFT \
> --port 8185 \
> --metrics-port 8186 \
> --engine-worker-queue-port 8187 \
> --max-model-len 16384 \
> --max-num-batched-tokens 16384 \
> --gpu-memory-utilization 0.7 \
> --max-num-seqs 256
/opt/conda/envs/python35-paddle120-env/lib/python3.10/site-packages/paddle/utils/cpp_extension/extension_utils.py:718: UserWarning: No ccache found. Please be aware that recompiling all source files may be required. You can download and install ccache from: https://github.com/ccache/ccache/blob/master/doc/INSTALL.md
warnings.warn(warning_message)
INFO 2025-12-12 12:54:19,655 43112 api_server.py[line:86] Number of api-server workers: 1.
[2025-12-12 12:54:19,665] [ INFO] - Using download source: huggingface
[2025-12-12 12:54:19,666] [ INFO] - Loading configuration file /home/aistudio/paddleocr_vl/PaddleOCR-VL-SFT/config.json
[2025-12-12 12:54:19,666] [ WARNING] - You are using a model of type paddleocr_vl to instantiate a model of type . This is not supported for all configurations of models and can yield errors.
[2025-12-12 12:54:19,667] [ WARNING] - You are using a model of type paddleocr_vl to instantiate a model of type . This is not supported for all configurations of models and can yield errors.
/home/aistudio/external-libraries/lib/python3.10/site-packages/_distutils_hack/__init__.py:30: UserWarning: Setuptools is replacing distutils. Support for replacing an already imported distutils is deprecated. In the future, this condition will fail. Register concerns at https://github.com/pypa/setuptools/issues/new?template=distutils-deprecation.yml
warnings.warn(
/opt/conda/envs/python35-paddle120-env/lib/python3.10/site-packages/paddleslim/common/load_model.py:20: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
import pkg_resources as pkg
/opt/conda/envs/python35-paddle120-env/lib/python3.10/site-packages/fastdeploy/model_executor/graph_optimization/utils.py:21: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead. If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you.
None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
[2025-12-12 12:54:22,422] [ INFO] - Using download source: huggingface
[2025-12-12 12:54:22,423] [ INFO] - Loading configuration file /home/aistudio/paddleocr_vl/PaddleOCR-VL-SFT/generation_config.json
Traceback (most recent call last):
File "/opt/conda/envs/python35-paddle120-env/lib/python3.10/site-packages/fastdeploy/input/preprocess.py", line 73, in create_processor
Processor = load_input_processor_plugins()
File "/opt/conda/envs/python35-paddle120-env/lib/python3.10/site-packages/fastdeploy/plugins/input_processor/__init__.py", line 26, in load_input_processor_plugins
assert len(plugins) == 1, "Only one plugin is allowed to be loaded."
AssertionError: Only one plugin is allowed to be loaded.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/conda/envs/python35-paddle120-env/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/opt/conda/envs/python35-paddle120-env/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.10/site-packages/fastdeploy/entrypoints/openai/api_server.py", line 715, in <module>
main()
File "/opt/conda/envs/python35-paddle120-env/lib/python3.10/site-packages/fastdeploy/entrypoints/openai/api_server.py", line 698, in main
if not load_engine():
File "/opt/conda/envs/python35-paddle120-env/lib/python3.10/site-packages/fastdeploy/entrypoints/openai/api_server.py", line 124, in load_engine
if not engine.start(api_server_pid=args.port):
File "/opt/conda/envs/python35-paddle120-env/lib/python3.10/site-packages/fastdeploy/engine/engine.py", line 134, in start
self.engine.create_data_processor()
File "/opt/conda/envs/python35-paddle120-env/lib/python3.10/site-packages/fastdeploy/engine/common_engine.py", line 161, in create_data_processor
self.data_processor = self.input_processor.create_processor()
File "/opt/conda/envs/python35-paddle120-env/lib/python3.10/site-packages/fastdeploy/input/preprocess.py", line 116, in create_processor
self.processor = PaddleOCRVLProcessor(
File "/opt/conda/envs/python35-paddle120-env/lib/python3.10/site-packages/fastdeploy/input/paddleocr_vl_processor/paddleocr_vl_processor.py", line 65, in __init__
super().__init__(model_name_or_path, reasoning_parser_obj, tool_parser_obj)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.10/site-packages/fastdeploy/input/text_processor.py", line 179, in __init__
self.tokenizer = self._load_tokenizer()
File "/opt/conda/envs/python35-paddle120-env/lib/python3.10/site-packages/fastdeploy/input/text_processor.py", line 628, in _load_tokenizer
return AutoTokenizer.from_pretrained(self.model_name_or_path, padding_side="left", use_fast=True)
File "/home/aistudio/external-libraries/lib/python3.10/site-packages/paddleformers/transformers/auto/tokenizer.py", line 329, in from_pretrained
raise ValueError(
ValueError: Tokenizer class Ernie4_5_Tokenizer does not exist or is not currently imported.
/opt/conda/envs/python35-paddle120-env/lib/python3.10/multiprocessing/resource_tracker.py:224: UserWarning: resource_tracker: There appear to be 1 leaked shared_memory objects to clean up at shutdown
warnings.warn('resource_tracker: There appear to be %d '
Trying to run inference from a Python script hits the same error:
from fastdeploy.entrypoints.llm import LLM
# Load the model
llm = LLM(model="/home/aistudio/paddleocr_vl/PaddleOCR-VL-SFT")
outputs = llm.chat(
messages=[
{"role": "user", "content": [ {"type": "image_url", "image_url": {"url": "https://ai-studio-static-online.cdn.bcebos.com/dc31c334d4664ca4955aa47d8e202a53a276fd0aab0840b09abe953fe51207d0"}},
{"type": "text", "text": "OCR:{}"}]}
],
chat_template_kwargs={"enable_thinking": False})
# Output the results
for output in outputs:
prompt = output.prompt
generated_text = output.outputs.text
reasoning_text = output.outputs.reasoning_content
Output:
[2025-12-12 12:58:45,335] [ INFO] - Using download source: huggingface
[2025-12-12 12:58:45,336] [ INFO] - Loading configuration file /home/aistudio/paddleocr_vl/PaddleOCR-VL-SFT/config.json
[2025-12-12 12:58:45,337] [ WARNING] - You are using a model of type paddleocr_vl to instantiate a model of type . This is not supported for all configurations of models and can yield errors.
[2025-12-12 12:58:45,338] [ WARNING] - You are using a model of type paddleocr_vl to instantiate a model of type . This is not supported for all configurations of models and can yield errors.
[2025-12-12 12:58:45,420] [ INFO] - Using download source: huggingface
[2025-12-12 12:58:45,422] [ INFO] - Loading configuration file /home/aistudio/paddleocr_vl/PaddleOCR-VL-SFT/generation_config.json
---------------------------------------------------------------------------
AssertionError Traceback (most recent call last)
File /opt/conda/envs/python35-paddle120-env/lib/python3.10/site-packages/fastdeploy/input/preprocess.py:73, in InputPreprocessor.create_processor(self)
71 from fastdeploy.plugins.input_processor import load_input_processor_plugins
---> 73 Processor = load_input_processor_plugins()
74 self.processor = Processor(
75 model_name_or_path=self.model_name_or_path,
76 reasoning_parser_obj=reasoning_parser_obj,
77 tool_parser_obj=tool_parser_obj,
78 )
File /opt/conda/envs/python35-paddle120-env/lib/python3.10/site-packages/fastdeploy/plugins/input_processor/__init__.py:26, in load_input_processor_plugins()
25 plugins = load_plugins_by_group(group=PLUGINS_GROUP)
---> 26 assert len(plugins) == 1, "Only one plugin is allowed to be loaded."
27 return next(iter(plugins.values()))()
AssertionError: Only one plugin is allowed to be loaded.
During handling of the above exception, another exception occurred:
ValueError Traceback (most recent call last)
Cell In[5], line 3
1 from fastdeploy.entrypoints.llm import LLM
2 # Load the model
----> 3 llm = LLM(model="/home/aistudio/paddleocr_vl/PaddleOCR-VL-SFT")
5 outputs = llm.chat(
6 messages=[
7 {"role": "user", "content": [ {"type": "image_url", "image_url": {"url": "https://ai-studio-static-online.cdn.bcebos.com/dc31c334d4664ca4955aa47d8e202a53a276fd0aab0840b09abe953fe51207d0"}},
8 {"type": "text", "text": "OCR:{}"}]}
9 ],
10 chat_template_kwargs={"enable_thinking": False})
12 # Output the results
File /opt/conda/envs/python35-paddle120-env/lib/python3.10/site-packages/fastdeploy/entrypoints/llm.py:100, in LLM.__init__(self, model, revision, tokenizer, enable_logprob, chat_template, **kwargs)
96 self.llm_engine = LLMEngine.from_engine_args(engine_args=engine_args)
98 self.default_sampling_params = SamplingParams(max_tokens=self.llm_engine.cfg.model_config.max_model_len)
--> 100 self.llm_engine.start()
102 self.mutex = threading.Lock()
103 self.req_output = dict()
File /opt/conda/envs/python35-paddle120-env/lib/python3.10/site-packages/fastdeploy/engine/engine.py:134, in LLMEngine.start(self, api_server_pid)
131 self.launch_components()
133 self.engine.start()
--> 134 self.engine.create_data_processor()
135 self.data_processor = self.engine.data_processor
137 # If block numer is specified and model is deployed in mixed mode, start cache manager first
File /opt/conda/envs/python35-paddle120-env/lib/python3.10/site-packages/fastdeploy/engine/common_engine.py:161, in EngineService.create_data_processor(self)
153 def create_data_processor(self):
154 self.input_processor = InputPreprocessor(
155 self.cfg.model_config,
156 self.cfg.structured_outputs_config.reasoning_parser,
(...)
159 self.cfg.tool_parser,
160 )
--> 161 self.data_processor = self.input_processor.create_processor()
File /opt/conda/envs/python35-paddle120-env/lib/python3.10/site-packages/fastdeploy/input/preprocess.py:116, in InputPreprocessor.create_processor(self)
111 elif "PaddleOCRVL" in architecture:
112 from fastdeploy.input.paddleocr_vl_processor import (
113 PaddleOCRVLProcessor,
114 )
--> 116 self.processor = PaddleOCRVLProcessor(
117 config=self.model_config,
118 model_name_or_path=self.model_name_or_path,
119 limit_mm_per_prompt=self.limit_mm_per_prompt,
120 mm_processor_kwargs=self.mm_processor_kwargs,
121 reasoning_parser_obj=reasoning_parser_obj,
122 )
123 elif "PaddleOCRVL" in architecture:
124 from fastdeploy.input.paddleocr_vl_processor import (
125 PaddleOCRVLProcessor,
126 )
File /opt/conda/envs/python35-paddle120-env/lib/python3.10/site-packages/fastdeploy/input/paddleocr_vl_processor/paddleocr_vl_processor.py:65, in PaddleOCRVLProcessor.__init__(self, config, model_name_or_path, limit_mm_per_prompt, mm_processor_kwargs, reasoning_parser_obj, tool_parser_obj, enable_processor_cache)
44 def __init__(
45 self,
46 config,
(...)
52 enable_processor_cache=False,
53 ):
54 """
55 Initialize PaddleOCRVLProcessor instance.
56
(...)
63 tool_parser_obj: Tool parser instance
64 """
---> 65 super().__init__(model_name_or_path, reasoning_parser_obj, tool_parser_obj)
66 data_processor_logger.info(f"model_name_or_path: {model_name_or_path}")
67 processor_kwargs = self._parse_processor_kwargs(mm_processor_kwargs)
File /opt/conda/envs/python35-paddle120-env/lib/python3.10/site-packages/fastdeploy/input/text_processor.py:179, in DataProcessor.__init__(self, model_name_or_path, reasoning_parser_obj, tool_parser_obj)
177 self.decode_status = dict()
178 self.tool_parser_dict = dict()
--> 179 self.tokenizer = self._load_tokenizer()
180 data_processor_logger.info(
181 f"tokenizer information: bos_token is {self.tokenizer.bos_token}, {self.tokenizer.bos_token_id}, \
182 eos_token is {self.tokenizer.eos_token}, {self.tokenizer.eos_token_id} "
183 )
185 from paddleformers.trl.llm_utils import get_eos_token_id
File /opt/conda/envs/python35-paddle120-env/lib/python3.10/site-packages/fastdeploy/input/text_processor.py:628, in DataProcessor._load_tokenizer(self)
625 else:
626 from paddleformers.transformers import AutoTokenizer
--> 628 return AutoTokenizer.from_pretrained(self.model_name_or_path, padding_side="left", use_fast=True)
File ~/external-libraries/lib/python3.10/site-packages/paddleformers/transformers/auto/tokenizer.py:329, in AutoTokenizer.from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs)
327 tokenizer_class = tokenizer_class_from_name(tokenizer_class_candidate)
328 if tokenizer_class is None:
--> 329 raise ValueError(
330 f"Tokenizer class {tokenizer_class_candidate} does not exist or is not currently imported."
331 )
333 # Bind PaddleTokenizerMixin
334 tokenizer_class = _bind_paddle_mixin_if_available(tokenizer_class)
ValueError: Tokenizer class Ernie4_5_Tokenizer does not exist or is not currently imported.
@jzhang533
Some files are missing. You can pull the HF repo, keep the other files unchanged, and just replace the original safetensors with your fine-tuned safetensors; then it will run.
Got it ~ thanks!
@zhang-prog there is still a problem.
The HF repo ships processing_paddleocr_vl.py, but here it complains that processing_ppocrvl.py is missing ~
After manually renaming the file, a new error appears:
(venv310) ✘ shun@shun-B660M-Pro-RS ~/workspace/Projects/erniekit_paddleocr_vl_ner master ±✚ python paddleocr_vl_transformers.py
The repository /media/shun/bigdata/Models/PaddleOCR_VL_SFT/PaddleOCR_VL_Prompt contains custom code which must be executed to correctly load the model. You can inspect the repository content at /media/shun/bigdata/Models/PaddleOCR_VL_SFT/PaddleOCR_VL_Prompt .
You can inspect the repository content at https://hf.co//media/shun/bigdata/Models/PaddleOCR_VL_SFT/PaddleOCR_VL_Prompt.
You can avoid this prompt in future by passing the argument `trust_remote_code=True`.
Do you wish to run the custom code? [y/N] y
Traceback (most recent call last):
File "/home/shun/workspace/Projects/erniekit_paddleocr_vl_ner/paddleocr_vl_transformers.py", line 22, in <module>
processor = AutoProcessor.from_pretrained("/media/shun/bigdata/Models/PaddleOCR_VL_SFT/PaddleOCR_VL_Prompt")
File "/home/shun/workspace/Projects/github/transformers/src/transformers/models/auto/processing_auto.py", line 382, in from_pretrained
processor_class = get_class_from_dynamic_module(
File "/home/shun/workspace/Projects/github/transformers/src/transformers/dynamic_module_utils.py", line 583, in get_class_from_dynamic_module
return get_class_in_module(class_name, final_module, force_reload=force_download)
File "/home/shun/workspace/Projects/github/transformers/src/transformers/dynamic_module_utils.py", line 311, in get_class_in_module
return getattr(module, class_name)
AttributeError: module 'transformers_modules.PaddleOCR_VL_Prompt.processing_ppocrvl' has no attribute 'PPOCRVLProcessor'. Did you mean: 'PaddleOCRVLProcessor'?
Is the auto_map configuration in tokenizer_config.json wrong?
"auto_map": {
"AutoProcessor": "processing_ppocrvl.PPOCRVLProcessor"
},
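Judging from the hint in the AttributeError above, and given that the HF repo ships the file as processing_paddleocr_vl.py, the entry presumably needs to reference the class that file actually defines (assuming the file keeps its original name rather than the renamed processing_ppocrvl.py), for example:
"auto_map": {
"AutoProcessor": "processing_paddleocr_vl.PaddleOCRVLProcessor"
},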
It still errors after changing it 🫠
(venv310) shun@shun-B660M-Pro-RS ~/workspace/Projects/erniekit_paddleocr_vl_ner master ±✚ python paddleocr_vl_transformers.py
The repository /media/shun/bigdata/Models/PaddleOCR_VL_SFT/PaddleOCR_VL_Prompt contains custom code which must be executed to correctly load the model. You can inspect the repository content at /media/shun/bigdata/Models/PaddleOCR_VL_SFT/PaddleOCR_VL_Prompt .
You can inspect the repository content at https://hf.co//media/shun/bigdata/Models/PaddleOCR_VL_SFT/PaddleOCR_VL_Prompt.
You can avoid this prompt in future by passing the argument `trust_remote_code=True`.
Do you wish to run the custom code? [y/N] y
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.52, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
Traceback (most recent call last):
File "/home/shun/workspace/Projects/erniekit_paddleocr_vl_ner/paddleocr_vl_transformers.py", line 22, in <module>
processor = AutoProcessor.from_pretrained("/media/shun/bigdata/Models/PaddleOCR_VL_SFT/PaddleOCR_VL_Prompt")
File "/home/shun/workspace/Projects/github/transformers/src/transformers/models/auto/processing_auto.py", line 387, in from_pretrained
return processor_class.from_pretrained(
File "/home/shun/workspace/Projects/github/transformers/src/transformers/processing_utils.py", line 1396, in from_pretrained
args = cls._get_arguments_from_pretrained(pretrained_model_name_or_path, **kwargs)
File "/home/shun/workspace/Projects/github/transformers/src/transformers/processing_utils.py", line 1482, in _get_arguments_from_pretrained
sub_processor = auto_processor_class.from_pretrained(pretrained_model_name_or_path, **kwargs)
File "/home/shun/workspace/Projects/github/transformers/src/transformers/models/auto/tokenization_auto.py", line 736, in from_pretrained
if tokenizer_class.__name__ == "PythonBackend": # unless you inherit from it?
AttributeError: 'NoneType' object has no attribute '__name__'. Did you mean: '__ne__'?
Could you confirm whether you pulled the latest code? Please paste the contents of config.json so I can take a look~
transformers is on the latest code ~ could the model files produced by fine-tuning be the problem?
Following https://github.com/PaddlePaddle/ERNIE/blob/release/v1.4/docs/paddleocr_vl_sft_zh.md, the version used there is git clone https://github.com/PaddlePaddle/ERNIE -b release/v1.4 ~
The config.json in the fine-tuned model output:
{
"architectures": [
"PaddleOCRVLForConditionalGeneration"
],
"attention_probs_dropout_prob": 0.0,
"auto_map": {
"AutoConfig": "configuration_paddleocr_vl.PaddleOCRVLConfig",
"AutoModel": "modeling_paddleocr_vl.PaddleOCRVLForConditionalGeneration",
"AutoModelForCausalLM": "modeling_paddleocr_vl.PaddleOCRVLForConditionalGeneration"
},
"compression_ratio": 1.0,
"disable_pipeline_warmup": false,
"enable_mtp_magic_send": false,
"fp16_opt_level": "O2",
"freq_allocation": 20,
"fuse_ln": false,
"fuse_rms_norm": true,
"head_dim": 128,
"hidden_act": "silu",
"hidden_dropout_prob": 0.0,
"hidden_size": 1024,
"ignored_index": -100,
"im_patch_id": 100295,
"image_token_id": 100295,
"intermediate_size": 3072,
"max_position_embeddings": 131072,
"max_text_id": 100295,
"model_type": "paddleocr_vl",
"moe_dropout_prob": 0.0,
"moe_multimodal_dispatch_use_allgather": "v2-alltoall-unpad",
"num_attention_heads": 16,
"num_hidden_layers": 18,
"num_key_value_heads": 2,
"paddleformers_version": "0.3.2",
"pixel_hidden_size": 1152,
"rms_norm_eps": 1e-05,
"rope_is_neox_style": true,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 500000,
"scale_qk_coeff": 1.0,
"seqlen": 16384,
"sliding_window": null,
"tie_word_embeddings": false,
"token_balance_loss": false,
"token_balance_seqlen": 16384,
"torch_dtype": "bfloat16",
"use_3d_rope": true,
"use_bias": false,
"use_flash_attn_with_mask": true,
"use_fp8": false,
"use_mem_eff_attn": true,
"use_recompute_moe": false,
"use_rmsnorm": true,
"video_token_id": 101307,
"vision_config": {
"_attn_implementation": "eager",
"_name_or_path": "",
"_save_to_hf": false,
"add_cross_attention": false,
"add_tail_layers": 0,
"architectures": [
"SiglipVisionModel"
],
"attention_dropout": 0.0,
"auto_map": {
"AutoConfig": "configuration_paddleocr_vl.PaddleOCRVLConfig",
"AutoModel": "modeling_paddleocr_vl.SiglipVisionModel"
},
"bad_words_ids": null,
"begin_suppress_tokens": null,
"bos_token_id": null,
"chunk_size_feed_forward": 0,
"classifier_dropout": null,
"context_parallel_degree": 1,
"cross_attention_hidden_size": null,
"decoder_start_token_id": null,
"diversity_penalty": 0.0,
"do_sample": false,
"dpo_config": null,
"dtype": "bfloat16",
"early_stopping": false,
"encoder_no_repeat_ngram_size": 0,
"eos_token_id": null,
"exponential_decay_length_penalty": null,
"finetuning_task": null,
"forced_bos_token_id": null,
"forced_eos_token_id": null,
"fuse_attention_ffn": false,
"fuse_attention_qkv": false,
"fuse_linear": false,
"fuse_rope": false,
"fuse_sequence_parallel_allreduce": false,
"fuse_swiglu": false,
"hidden_act": "gelu_new",
"hidden_size": 1152,
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1"
},
"image_size": 384,
"intermediate_size": 4304,
"is_decoder": false,
"is_encoder_decoder": false,
"kto_config": null,
"label2id": {
"LABEL_0": 0,
"LABEL_1": 1
},
"layer_norm_eps": 1e-06,
"length_penalty": 1.0,
"loss_subbatch_sequence_length": -1,
"max_length": 20,
"min_length": 0,
"model_type": "paddleocr_vl",
"moe_subbatch_token_num": 0,
"no_recompute_layers": null,
"no_repeat_ngram_size": 0,
"num_attention_heads": 16,
"num_beam_groups": 1,
"num_beams": 1,
"num_channels": 3,
"num_choices": null,
"num_hidden_layers": 27,
"num_return_sequences": 1,
"offload_recompute_inputs": false,
"output_attentions": false,
"output_hidden_states": false,
"output_scores": false,
"pad_token_id": 0,
"patch_size": 14,
"pipeline_parallel_degree": 1,
"pp_recompute_interval": 1,
"prefix": null,
"problem_type": null,
"pruned_heads": {},
"quantization_config": {
"act_quant_method": "abs_max",
"activation_scheme": null,
"actscale_moving_rate": 0.01,
"apply_hadamard": false,
"apply_online_actscale_step": 200,
"dense_quant_type": "",
"dtype": null,
"fmt": null,
"fp8_format_type": "hybrid",
"group_size": -1,
"hadamard_block_size": 32,
"ignore_modules": null,
"llm_int8_threshold": 6.0,
"moe_quant_type": "",
"qlora_weight_blocksize": 64,
"qlora_weight_double_quant": false,
"qlora_weight_double_quant_block_size": 256,
"quant_input_grad": false,
"quant_method": null,
"quant_round_type": 0,
"quant_type": null,
"quant_weight_grad": false,
"quantization": "",
"scale_epsilon": 1e-08,
"shift": false,
"shift_smooth_all_linears": false,
"smooth": false,
"weight_block_size": null,
"weight_quant_method": "abs_max_channel_wise",
"weight_quantize_algo": null
},
"recompute": true,
"recompute_granularity": "full",
"recompute_use_reentrant": false,
"refined_recompute": "",
"remove_invalid_values": false,
"repetition_penalty": 1.0,
"return_dict": false,
"return_dict_in_generate": false,
"sep_parallel_degree": 1,
"sep_token_id": null,
"sequence_parallel": false,
"spatial_merge_size": 2,
"suppress_tokens": null,
"task_specific_params": null,
"temperature": 1.0,
"temporal_patch_size": 2,
"tensor_parallel_degree": 1,
"tensor_parallel_output": true,
"tensor_parallel_rank": 0,
"tie_encoder_decoder": false,
"tie_word_embeddings": true,
"tokenizer_class": null,
"tokens_per_second": 2,
"top_k": 50,
"top_p": 1.0,
"typical_p": 1.0,
"use_cache": false,
"use_filtered_label_loss": false,
"use_flash_attention": true,
"use_fused_dropout_add": false,
"use_fused_head_and_loss_fn": false,
"use_fused_linear": false,
"use_fused_linear_cross_entropy": false,
"use_fused_rms_norm": false,
"use_fused_rope": false,
"use_sparse_flash_attn": true,
"use_sparse_head_and_loss_fn": false,
"virtual_pp_degree": 1
},
"vision_start_token_id": 101305,
"vocab_size": 103424,
"weight_share_add_bias": true
}
Some files are missing. You can pull the HF repo, keep the other files unchanged, and just replace the original safetensors with your fine-tuned safetensors; then it will run.
Did you perhaps only copy a few .py files into the fine-tuned directory? I may not have been clear: I meant that all of the files (including config.json) should come from the Hugging Face repo, i.e. https://huggingface.co/PaddlePaddle/PaddleOCR-VL/tree/main,
and that only the safetensors inside it are replaced; in other words, the only thing needed from your fine-tune output is the safetensors file.
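For reference, a minimal sketch of that workflow (assuming huggingface_hub is installed, that the fine-tuned checkpoint uses the same safetensors file layout as the official release, and that the local_dir path below is just illustrative):
from pathlib import Path
import shutil

from huggingface_hub import snapshot_download

# Download the complete official repo (config.json, tokenizer files, processing code, ...).
base_dir = Path(snapshot_download("PaddlePaddle/PaddleOCR-VL", local_dir="/home/aistudio/PaddleOCR-VL-base"))

# Directory produced by fine-tuning; only its weight files are reused.
sft_dir = Path("/home/aistudio/paddleocr_vl/PaddleOCR-VL-SFT")

# Overwrite only the weights; every other file stays exactly as published on Hugging Face.
for weights in sft_dir.glob("*.safetensors"):
    shutil.copy2(weights, base_dir / weights.name)
After the swap, point --model (or LLM(model=...)) at the merged directory and relaunch the api_server command from the top of the issue.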
Got it ~
I had previously downloaded the model files from ModelScope https://modelscope.cn/models/PaddlePaddle/PaddleOCR-VL/files (it is faster); judging from this, the model there seems to be outdated ...
OK, I'll pass this along to the relevant colleagues to get it updated~