
Why does running the code fail with ValueError: 150001 is not in list?

Open LCK-Lin opened this issue 2 years ago • 8 comments

Traceback (most recent call last):
  File "run_clm.py", line 564, in <module>
    main()
  File "run_clm.py", line 512, in main
    train_result = trainer.train(resume_from_checkpoint=checkpoint)
  File "/opt/conda/envs/GLM_instruct/lib/python3.8/site-packages/transformers/trainer.py", line 1633, in train
    return inner_training_loop(
  File "/opt/conda/envs/GLM_instruct/lib/python3.8/site-packages/transformers/trainer.py", line 1902, in _inner_training_loop
    tr_loss_step = self.training_step(model, inputs)
  File "/opt/conda/envs/GLM_instruct/lib/python3.8/site-packages/transformers/trainer.py", line 2645, in training_step
    loss = self.compute_loss(model, inputs)
  File "/opt/conda/envs/GLM_instruct/lib/python3.8/site-packages/transformers/trainer.py", line 2677, in compute_loss
    outputs = model(**inputs)
  File "/opt/conda/envs/GLM_instruct/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/workspace/[email protected]/lck/ChatGLM-Instruct-Tuning/modeling_chatglm.py", line 1033, in forward
    transformer_outputs = self.transformer(
  File "/opt/conda/envs/GLM_instruct/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/workspace/[email protected]/lck/ChatGLM-Instruct-Tuning/modeling_chatglm.py", line 836, in forward
    mask_position = seq.index(mask_token)
ValueError: 150001 is not in list
wandb: Waiting for W&B process to finish... (failed 1).
wandb: You can sync this run to the cloud by running:
wandb: wandb sync /workspace/[email protected]/lck/ChatGLM-Instruct-Tuning/wandb/offline-run-20230408_040222-i5caj5n7
wandb: Find logs at: ./wandb/offline-run-20230408_040222-i5caj5n7/logs

LCK-Lin avatar Apr 08 '23 04:04 LCK-Lin

same

changsha2999 avatar Apr 13 '23 23:04 changsha2999

same

This happens because the ChatGLM-6B code on Hugging Face was updated; replacing your local copy with the latest model files fixes it.

LCK-Lin avatar Apr 14 '23 00:04 LCK-Lin

Modify ChatGLM-Instruct-Tuning/modeling_chatglm.py, line 836: change the token ids to MASK, gMASK = 130000, 130001

changsha2999 avatar Apr 14 '23 01:04 changsha2999
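The hardcoded ids in the suggested fix illustrate the root cause: modeling_chatglm.py calls seq.index(mask_token) on the input ids, and the ValueError means the expected mask id is simply absent because the tokenizer version and the model code disagree. As a minimal sketch (not the repository's actual code; the helper name and the old/new id pairs 150001 vs. 130001 are taken from this thread), a defensive lookup that tries each known id avoids the hard crash:

```python
def find_mask_position(seq, candidate_mask_ids=(130001, 150001)):
    """Return the position of the first mask token found in seq.

    candidate_mask_ids covers both the new (130001) and old (150001)
    [gMASK] ids seen across ChatGLM-6B checkpoint versions, as
    reported in this issue thread.
    """
    for mask_id in candidate_mask_ids:
        if mask_id in seq:
            # list.index raises ValueError if absent, so guard first.
            return seq.index(mask_id)
    raise ValueError(f"none of {candidate_mask_ids} found in sequence")

# Example: new-style tokenizer output containing gMASK id 130001.
print(find_mask_position([5, 130001, 7]))  # → 1
```

A version check like this only papers over the mismatch; the cleaner fix remains keeping modeling_chatglm.py, tokenization_chatglm.py, and tokenizer_config.json from the same Hugging Face revision.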

Modify ChatGLM-Instruct-Tuning/modeling_chatglm.py, line 836: change the token ids to MASK, gMASK = 130000, 130001. Two other files were also updated: tokenization_chatglm.py and tokenizer_config.json. Did you replace those two as well?

LCK-Lin avatar Apr 14 '23 01:04 LCK-Lin

Modify ChatGLM-Instruct-Tuning/modeling_chatglm.py, line 836: change the token ids to MASK, gMASK = 130000, 130001. Two other files were also updated: tokenization_chatglm.py and tokenizer_config.json. Did you replace those two as well?

Thanks for the info. Those two files aren't referenced directly in the tuning project, so I assume the latest Hugging Face versions are already being picked up from the cache. Did it run successfully on your side? When I run it, it immediately consumes all 30 GB of GPU memory, and switching the model to load_in_8bit causes all sorts of other problems, so it still isn't running. Any suggestions? Thanks!

changsha2999 avatar Apr 14 '23 02:04 changsha2999

Same problem here.

Hasen-dc avatar Apr 25 '23 08:04 Hasen-dc

130000, 130001

Did this run successfully for you?

YSLLYW avatar Apr 30 '23 08:04 YSLLYW

Modify ChatGLM-Instruct-Tuning/modeling_chatglm.py, line 836: change the token ids to MASK, gMASK = 130000, 130001. Two other files were also updated: tokenization_chatglm.py and tokenizer_config.json. Did you replace those two as well? I replaced them with the latest versions, but fine-tuning still fails with: ValueError: 130004 is not in list

YSLLYW avatar Apr 30 '23 12:04 YSLLYW