
[BUG/Help] After full-parameter fine-tuning with main.py, the model's output is always empty

Open lmx760581375 opened this issue 1 year ago • 0 comments

Is there an existing issue for this?

  • [X] I have searched the existing issues

Current Behavior

```python
from transformers import AutoModel, AutoTokenizer
import sys

local_model_path = "my_local_checkpoint_path"

model_path = "THUDM/chatglm-6b"  # You can modify the path for storing the local model

model = AutoModel.from_pretrained(local_model_path, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(local_model_path, trust_remote_code=True)

response, history = model.chat(tokenizer, "你好", history=[])
print("\033[1;36m" + "chatGLM:{}".format(response) + "\033[0m")

line = input("Human:")
while line:
    response, history = model.chat(tokenizer, str(line), history=[])
    print("\033[1;36m" + "chatGLM:{}".format(response) + "\033[0m")
    # print('\033[42m' + "chatGLM:{}" + '\033[42m'.format(response))
    line = input("Human:")
    # print(line)
```
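For reference, the interactive loop above can be sketched with an explicit guard that distinguishes an empty model response from a normal one. This is a minimal, self-contained illustration: `FakeModel` below is a stub standing in for the fine-tuned checkpoint (assumption: the real model exposes the same `chat(tokenizer, query, history=...)` interface used in the snippet), so the loop logic can be exercised without loading ChatGLM-6B.

```python
class FakeModel:
    """Stub with the same chat() shape as ChatGLM-6B's chat API (assumed)."""

    def chat(self, tokenizer, query, history=None):
        # Echo stub; the real model returns (response, updated_history).
        response = "echo: " + query
        return response, (history or []) + [(query, response)]


def chat_once(model, tokenizer, query, history):
    """Run one chat turn, flagging empty responses instead of printing nothing."""
    response, history = model.chat(tokenizer, query, history=history)
    if not response.strip():
        # An empty response after fine-tuning usually indicates a problem with
        # how the checkpoint was saved or reloaded, not with the prompt itself.
        print("\033[1;31m" + "chatGLM returned an empty response" + "\033[0m")
    else:
        print("\033[1;36m" + "chatGLM:{}".format(response) + "\033[0m")
    return history


if __name__ == "__main__":
    model, tokenizer, history = FakeModel(), None, []
    history = chat_once(model, tokenizer, "你好", history)
```

With a guard like this, the failure mode reported in this issue would surface as an explicit "empty response" message on every turn rather than silent blank output.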

I did full-parameter fine-tuning with main.py. After reloading the fine-tuned model, it no longer produces anything: every response is empty.

Expected Behavior

No response

Steps To Reproduce

Trained on my own dataset with main.py, then saved the fine-tuned model with trainer.save_model().

Environment

- OS:centos7
- Python:3.8.13
- Transformers:4.28.0.dev0
- PyTorch: 1.13.0
- CUDA Support (`python -c "import torch; print(torch.cuda.is_available())"`) : 11.7

Anything else?

No response

lmx760581375 avatar Apr 26 '23 11:04 lmx760581375