Med-ChatGLM
ChatGLM2 fine-tuning issue
[INFO|tokenization_utils_base.py:1800] 2023-08-02 20:41:21,905 >> loading file tokenizer.model
[INFO|tokenization_utils_base.py:1800] 2023-08-02 20:41:21,905 >> loading file added_tokens.json
[INFO|tokenization_utils_base.py:1800] 2023-08-02 20:41:21,905 >> loading file special_tokens_map.json
[INFO|tokenization_utils_base.py:1800] 2023-08-02 20:41:21,906 >> loading file tokenizer_config.json
[WARNING|modeling_utils.py:2092] 2023-08-02 20:41:21,939 >> The argument `trust_remote_code` is to be used with Auto classes. It has no effect here and is ignored.
[INFO|modeling_utils.py:2400] 2023-08-02 20:41:21,940 >> loading weights file ./chatglm2-6b/pytorch_model.bin.index.json
[INFO|modeling_utils.py:2443] 2023-08-02 20:41:21,940 >> Will use torch_dtype=torch.float16 as defined in model's config object
[INFO|modeling_utils.py:1126] 2023-08-02 20:41:21,940 >> Instantiating ChatGLMForConditionalGeneration model under default dtype torch.float16.
[INFO|configuration_utils.py:575] 2023-08-02 20:41:21,941 >> Generate config GenerationConfig {
"_from_model_config": true,
"eos_token_id": 2,
"pad_token_id": 0,
"transformers_version": "4.27.1"
}
Traceback (most recent call last):
File "/data/Med-ChatGLM/run_clm.py", line 564, in
This looks like a model-file mismatch. This repo's fine-tuning code targets the first-generation ChatGLM; you should use the first-generation model files rather than the chatglm2-6b weights.
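For illustration, a minimal sketch of loading first-generation weights; the local path ./chatglm-6b is an assumption, substitute wherever you placed the THUDM/chatglm-6b checkpoint:

from transformers import AutoModel, AutoTokenizer

# Assumed local path to the first-generation THUDM/chatglm-6b checkpoint;
# the chatglm2-6b weights are not what this repo's fine-tuning code expects.
MODEL_PATH = "./chatglm-6b"

# trust_remote_code is needed because ChatGLM ships custom modeling code,
# and it only takes effect with the Auto* classes (see the warning above).
tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH, trust_remote_code=True)
model = AutoModel.from_pretrained(MODEL_PATH, trust_remote_code=True).half()

If the script follows the standard run_clm.py arguments, pointing --model_name_or_path at the same directory should then let fine-tuning proceed.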