ChatGLM-Finetuning

Fine-tuning of the ChatGLM-6B, ChatGLM2-6B, and ChatGLM3-6B models for specific downstream tasks, covering Freeze, LoRA, P-tuning, full-parameter fine-tuning, and more.

67 ChatGLM-Finetuning issues

input_ids = [tokenizer.get_command("[gMASK]"), tokenizer.get_command("sop")] + tokenizer.convert_tokens_to_ids(tokens) — what does this line mean? Why does it differ so much from the earlier chatglm version, and why can it be written in this format?
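A minimal sketch of what that line does: ChatGLM3 prepends two special "command" tokens, [gMASK] (generate from here) and sop (start of piece), before the tokenized prompt. The token IDs below are illustrative placeholders, not guaranteed to match the real ChatGLM3 vocabulary.

```python
# Assumed special-token IDs, for illustration only -- the real values come
# from tokenizer.get_command(...) in the ChatGLM3 tokenizer.
SPECIAL_TOKENS = {"[gMASK]": 64790, "sop": 64792}

def build_input_ids(prompt_token_ids):
    """Prepend the two command tokens, mirroring
    [tokenizer.get_command("[gMASK]"), tokenizer.get_command("sop")] + ids."""
    return [SPECIAL_TOKENS["[gMASK]"], SPECIAL_TOKENS["sop"]] + prompt_token_ids

print(build_input_ids([101, 102, 103]))  # → [64790, 64792, 101, 102, 103]
```

The earlier ChatGLM-6B tokenizer handled this prefix internally, which is why the construction looks so different across versions.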

root@VM-11-20-ubuntu:/home/jerry/ChatGLM-Finetuning# deepspeed predict_pt.py --model_dir /home/jerry/ChatGLM-Finetuning/output_dir_pt_20/global_step-3600/
[2023-08-17 20:52:59,552] [WARNING] [runner.py:186:fetch_hostfile] Unable to find hostfile, will proceed with training with local resources only.
[2023-08-17 20:52:59,561] [INFO] [runner.py:548:main] cmd = /usr/bin/python3 -u -m...

This never happened when training directly with python; it is my first time training with the DeepSpeed framework. Freeze training, fine-tuning the last three layers of GLM2, on a single RTX 3090, with the following parameters:
--per_device_train_batch_size 4 \
--max_len 512 \
--max_src_len 256 \
--learning_rate 1e-4 \
--weight_decay 0.1 \
--num_train_epochs 1 \
--gradient_accumulation_steps 16 \
--warmup_ratio 0.1 \
The GPU inexplicably stalls; running nvidia-smi then returns Unable...
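For context on the freeze setup being described, a minimal sketch of the "only the last few layers are trainable" idea: keep requires_grad on parameters whose names match the last N transformer layers. The 28-layer count and the "layers.&lt;i&gt;." name pattern are assumptions based on typical GLM-style module naming, not the exact ChatGLM2 names.

```python
def trainable(name, num_layers=28, num_unfrozen=3):
    """Return True if a parameter named like
    'transformer.encoder.layers.27.mlp.dense.weight' belongs to one of
    the last `num_unfrozen` layers (assumed naming scheme)."""
    return any(f"layers.{i}." in name
               for i in range(num_layers - num_unfrozen, num_layers))

# In a real run this would drive the freeze loop, e.g.:
# for name, param in model.named_parameters():
#     param.requires_grad = trainable(name)
print(trainable("transformer.encoder.layers.27.mlp.dense.weight"))  # → True
print(trainable("transformer.encoder.layers.0.self_attention.query_key_value.weight"))  # → False
```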

Could you advise whether ChatGLM has 27 layers that can be fine-tuned with LoRA, and what each layer actually does? I am a machine-learning beginner, any pointers appreciated.

Traceback (most recent call last):
  File "/root/ChatGLM-Finetuning/train.py", line 234, in <module>
    main()
  File "/root/ChatGLM-Finetuning/train.py", line 96, in main
    tokenizer = MODE[args.mode]["tokenizer"].from_pretrained(args.model_name_or_path)
  File "/opt/conda/envs/pytorch1.8/lib/python3.9/site-packages/transformers/tokenization_utils_base.py", line 1813, in from_pretrained
    resolved_vocab_files[file_id] = cached_file(
  File...

Hello Mr. Liu, single-GPU runs work fine for me, but with multiple GPUs both LoRA and P-tuning fail with the error below. What could be the cause? Looking forward to your reply, thanks 🙏