VisualGLM-6B
When fine-tuning, how should I deal with "ModuleNotFoundError: No module named 'sat.model.finetune.lora_mixin'"? I'm a student renting time on a compute platform and I'm not sure whether I set up the deployment correctly.
root@autodl-container-9ed51187fa-29111776:~/autodl-tmp# bash finetune/finetune_visualglm.sh
NCCL_DEBUG=info NCCL_IB_DISABLE=0 NCCL_NET_GDR_LEVEL=2 deepspeed --master_port 16666 --hostfile hostfile_single finetune_visualglm.py --experiment-name finetune-visualglm-6b --model-parallel-size 1 --mode finetune --train-iters 300 --resume-dataloader --max_source_length 64 --max_target_length 256 --lora_rank 10 --pre_seq_len 4 --train-data ./fewshot-data/dataset.json --valid-data ./fewshot-data/dataset.json --distributed-backend nccl --lr-decay-style cosine --warmup .02 --checkpoint-activations --save-interval 300 --eval-interval 10000 --save ./checkpoints --split 1 --eval-iters 10 --eval-batch-size 8 --zero-stage 1 --lr 0.0001 --batch-size 20 --skip-init --fp16 --use_lora
Setting ds_accelerator to cuda (auto detect)
[2023-06-08 15:58:22,720] [WARNING] [runner.py:196:fetch_hostfile] Unable to find hostfile, will proceed with training with local resources only.
[2023-06-08 15:58:22,766] [INFO] [runner.py:555:main] cmd = /root/miniconda3/bin/python -u -m deepspeed.launcher.launch --world_info=eyJsb2NhbGhvc3QiOiBbMF19 --master_addr=127.0.0.1 --master_port=16666 --enable_each_rank_log=None finetune_visualglm.py --experiment-name finetune-visualglm-6b --model-parallel-size 1 --mode finetune --train-iters 300 --resume-dataloader --max_source_length 64 --max_target_length 256 --lora_rank 10 --pre_seq_len 4 --train-data ./fewshot-data/dataset.json --valid-data ./fewshot-data/dataset.json --distributed-backend nccl --lr-decay-style cosine --warmup .02 --checkpoint-activations --save-interval 300 --eval-interval 10000 --save ./checkpoints --split 1 --eval-iters 10 --eval-batch-size 8 --zero-stage 1 --lr 0.0001 --batch-size 20 --skip-init --fp16 --use_lora
Setting ds_accelerator to cuda (auto detect)
[2023-06-08 15:58:24,247] [INFO] [launch.py:138:main] 0 NCCL_IB_DISABLE=0
[2023-06-08 15:58:24,247] [INFO] [launch.py:138:main] 0 NCCL_DEBUG=info
[2023-06-08 15:58:24,247] [INFO] [launch.py:138:main] 0 NCCL_NET_GDR_LEVEL=2
[2023-06-08 15:58:24,247] [INFO] [launch.py:138:main] 0 NV_LIBNCCL_DEV_PACKAGE=libnccl-dev=2.9.6-1+cuda11.3
[2023-06-08 15:58:24,247] [INFO] [launch.py:138:main] 0 NV_LIBNCCL_DEV_PACKAGE_VERSION=2.9.6-1
[2023-06-08 15:58:24,247] [INFO] [launch.py:138:main] 0 NCCL_P2P_DISABLE=1
[2023-06-08 15:58:24,247] [INFO] [launch.py:138:main] 0 NCCL_VERSION=2.9.6-1
[2023-06-08 15:58:24,247] [INFO] [launch.py:138:main] 0 NV_LIBNCCL_DEV_PACKAGE_NAME=libnccl-dev
[2023-06-08 15:58:24,247] [INFO] [launch.py:138:main] 0 NV_LIBNCCL_PACKAGE=libnccl2=2.9.6-1+cuda11.3
[2023-06-08 15:58:24,247] [INFO] [launch.py:138:main] 0 NV_LIBNCCL_PACKAGE_NAME=libnccl2
[2023-06-08 15:58:24,247] [INFO] [launch.py:138:main] 0 NV_LIBNCCL_PACKAGE_VERSION=2.9.6-1
[2023-06-08 15:58:24,248] [INFO] [launch.py:145:main] WORLD INFO DICT: {'localhost': [0]}
[2023-06-08 15:58:24,248] [INFO] [launch.py:151:main] nnodes=1, num_local_procs=1, node_rank=0
[2023-06-08 15:58:24,248] [INFO] [launch.py:162:main] global_rank_mapping=defaultdict(<class 'list'>, {'localhost': [0]})
[2023-06-08 15:58:24,248] [INFO] [launch.py:163:main] dist_world_size=1
[2023-06-08 15:58:24,248] [INFO] [launch.py:165:main] Setting CUDA_VISIBLE_DEVICES=0
Setting ds_accelerator to cuda (auto detect)
Traceback (most recent call last):
File "finetune_visualglm.py", line 9, in
Running cli_demo.py also raises "ModuleNotFoundError: No module named 'sat.model.finetune.lora_mixin'".
Did you modify the code? The repo code imports it directly with from lora_mixin import LoraMixin; it doesn't go through sat.
Also, the lora file inside sat has been renamed to lora2, so the import is from sat.model.finetune.lora2 import LoraMixin.
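For reference, a compatibility import along these lines is a possible sketch; which of the two module names actually exists depends on the installed sat version, so treat the fallback name as an assumption rather than a guaranteed path:

```python
# LoraMixin moved inside SwissArmyTransformer (sat): newer releases ship it as
# sat.model.finetune.lora2, while older code imported it under a different name.
# Try the new location first and fall back to the older one.
try:
    from sat.model.finetune.lora2 import LoraMixin
except ImportError:
    from sat.model.finetune.lora_mixin import LoraMixin
```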
Thanks for your help. After re-pulling from git the module is there. But now I've hit another annoying problem: the files downloaded for fine-tuning go into root/.cache, and that disk only has 14G, while the package is 15.6G after extraction. Is there a way to change the download path for the fine-tuning downloads in the Python code? Thanks!
The sat cache should be under /root/.sat_models; you can change the download path by setting SAT_HOME=/your/dir. The root/.cache you mention is probably the huggingface cache, but fine-tuning doesn't use huggingface. As far as I remember, huggingface has a similar environment variable.
SAT_HOME=
Which Python script do I set this SAT_HOME= parameter in, or which line of the command do I add it to?
The root/.cache you mention is probably the huggingface cache
Something like that ~ I downloaded it under root but couldn't unzip it there, so I moved it elsewhere to unzip.
SAT_HOME=/your/dir python run.py
or
SAT_HOME=/your/dir bash script.sh
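If you prefer to set it from inside Python (for example near the top of finetune_visualglm.py), set the environment variable before sat starts downloading anything; the directories below are only example paths, not ones the repo defines:

```python
import os

# Redirect the sat model cache (default: /root/.sat_models) to a disk with
# enough free space. This must be set before the model download/loading starts.
os.environ["SAT_HOME"] = "/root/autodl-tmp/sat_models"   # example path

# Huggingface has a similar knob for its tokenizer/model cache
# (default: ~/.cache/huggingface).
os.environ["HF_HOME"] = "/root/autodl-tmp/hf_cache"      # example path
```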
It still doesn't quite work; I just tried it. Emmm, may I ask: after it downloads visualglm-6b.zip, which directory does it extract to? I'll try freeing up some space on the root disk and then move the extracted files in to see.
When I run bash finetune/finetune_visualglm.sh it downloads visualglm-6b.zip (13.4G) and then extracts it, and during extraction it reports insufficient space. What I'm doing now is moving the zip to another disk; after extraction it is 14.5G and contains a folder named visualglm-6b. Which directory should I move this folder to so the training run can find it? Thanks a lot!
It works now! ~ I moved it here and it loads: /root/.sat_models/visualglm-6b/1/mp_rank_00_model_states.pt. But now there's a new bug:
root@autodl-container-9ed51187fa-29111776:~/autodl-tmp/VisualGLM-6B# bash finetune/finetune_visualglm.sh
NCCL_DEBUG=info NCCL_IB_DISABLE=0 NCCL_NET_GDR_LEVEL=2 deepspeed --master_port 16666 --hostfile hostfile_single finetune_visualglm.py --experiment-name finetune-visualglm-6b --model-parallel-size 1 --mode finetune --train-iters 300 --resume-dataloader --max_source_length 64 --max_target_length 256 --lora_rank 10 --layer_range 0 14 --pre_seq_len 4 --train-data ./fewshot-data/dataset.json --valid-data ./fewshot-data/dataset.json --distributed-backend nccl --lr-decay-style cosine --warmup .02 --checkpoint-activations --save-interval 300 --eval-interval 10000 --save ./checkpoints --split 1 --eval-iters 10 --eval-batch-size 8 --zero-stage 1 --lr 0.0001 --batch-size 4 --skip-init --fp16 --use_lora
Setting ds_accelerator to cuda (auto detect)
[2023-06-08 17:59:54,926] [WARNING] [runner.py:196:fetch_hostfile] Unable to find hostfile, will proceed with training with local resources only.
[2023-06-08 17:59:54,995] [INFO] [runner.py:555:main] cmd = /root/miniconda3/bin/python -u -m deepspeed.launcher.launch --world_info=eyJsb2NhbGhvc3QiOiBbMF19 --master_addr=127.0.0.1 --master_port=16666 --enable_each_rank_log=None finetune_visualglm.py --experiment-name finetune-visualglm-6b --model-parallel-size 1 --mode finetune --train-iters 300 --resume-dataloader --max_source_length 64 --max_target_length 256 --lora_rank 10 --layer_range 0 14 --pre_seq_len 4 --train-data ./fewshot-data/dataset.json --valid-data ./fewshot-data/dataset.json --distributed-backend nccl --lr-decay-style cosine --warmup .02 --checkpoint-activations --save-interval 300 --eval-interval 10000 --save ./checkpoints --split 1 --eval-iters 10 --eval-batch-size 8 --zero-stage 1 --lr 0.0001 --batch-size 4 --skip-init --fp16 --use_lora
Setting ds_accelerator to cuda (auto detect)
[2023-06-08 17:59:56,575] [INFO] [launch.py:138:main] 0 NCCL_IB_DISABLE=0
[2023-06-08 17:59:56,575] [INFO] [launch.py:138:main] 0 NCCL_DEBUG=info
[2023-06-08 17:59:56,575] [INFO] [launch.py:138:main] 0 NCCL_NET_GDR_LEVEL=2
[2023-06-08 17:59:56,575] [INFO] [launch.py:138:main] 0 NV_LIBNCCL_DEV_PACKAGE=libnccl-dev=2.9.6-1+cuda11.3
[2023-06-08 17:59:56,576] [INFO] [launch.py:138:main] 0 NV_LIBNCCL_DEV_PACKAGE_VERSION=2.9.6-1
[2023-06-08 17:59:56,576] [INFO] [launch.py:138:main] 0 NCCL_P2P_DISABLE=1
[2023-06-08 17:59:56,576] [INFO] [launch.py:138:main] 0 NCCL_VERSION=2.9.6-1
[2023-06-08 17:59:56,576] [INFO] [launch.py:138:main] 0 NV_LIBNCCL_DEV_PACKAGE_NAME=libnccl-dev
[2023-06-08 17:59:56,576] [INFO] [launch.py:138:main] 0 NV_LIBNCCL_PACKAGE=libnccl2=2.9.6-1+cuda11.3
[2023-06-08 17:59:56,576] [INFO] [launch.py:138:main] 0 NV_LIBNCCL_PACKAGE_NAME=libnccl2
[2023-06-08 17:59:56,576] [INFO] [launch.py:138:main] 0 NV_LIBNCCL_PACKAGE_VERSION=2.9.6-1
[2023-06-08 17:59:56,576] [INFO] [launch.py:145:main] WORLD INFO DICT: {'localhost': [0]}
[2023-06-08 17:59:56,576] [INFO] [launch.py:151:main] nnodes=1, num_local_procs=1, node_rank=0
[2023-06-08 17:59:56,576] [INFO] [launch.py:162:main] global_rank_mapping=defaultdict(<class 'list'>, {'localhost': [0]})
[2023-06-08 17:59:56,576] [INFO] [launch.py:163:main] dist_world_size=1
[2023-06-08 17:59:56,576] [INFO] [launch.py:165:main] Setting CUDA_VISIBLE_DEVICES=0
Setting ds_accelerator to cuda (auto detect)
===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please run
python -m bitsandbytes
and submit this information together with your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
bin /root/miniconda3/lib/python3.8/site-packages/bitsandbytes/libbitsandbytes_cuda113.so
/root/miniconda3/lib/python3.8/site-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/usr/local/nvidia/lib64'), PosixPath('/usr/local/nvidia/lib')}
warn(msg)
/root/miniconda3/lib/python3.8/site-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: /usr/local/nvidia/lib:/usr/local/nvidia/lib64 did not contain ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0'] as expected! Searching further paths...
warn(msg)
/root/miniconda3/lib/python3.8/site-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('http'), PosixPath('//autodl-container-9ed51187fa-29111776'), PosixPath('8888/jupyter')}
warn(msg)
/root/miniconda3/lib/python3.8/site-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('Asia/Shanghai')}
warn(msg)
/root/miniconda3/lib/python3.8/site-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('//developer.download.nvidia.com/compute/machine-learning/repos/ubuntu2004/x86_64'), PosixPath('https')}
warn(msg)
CUDA_SETUP: WARNING! libcudart.so not found in any environmental path. Searching in backup paths...
/root/miniconda3/lib/python3.8/site-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: Found duplicate ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0'] files: {PosixPath('/usr/local/cuda/lib64/libcudart.so'), PosixPath('/usr/local/cuda/lib64/libcudart.so.11.0')}.. We'll flip a coin and try one of these, in order to fail forward.
Either way, this might cause trouble in the future:
If you get CUDA error: invalid device function
errors, the above might be the cause and the solution is to make sure only one ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0'] in the paths that we search based on your env.
warn(msg)
CUDA SETUP: CUDA runtime path found: /usr/local/cuda/lib64/libcudart.so
CUDA SETUP: Highest compute capability among GPUs detected: 8.0
CUDA SETUP: Detected CUDA version 113
CUDA SETUP: Loading binary /root/miniconda3/lib/python3.8/site-packages/bitsandbytes/libbitsandbytes_cuda113.so...
[2023-06-08 17:59:59,712] [INFO] using world size: 1 and model-parallel size: 1
[2023-06-08 17:59:59,712] [INFO] > padded vocab (size: 100) with 28 dummy tokens (new size: 128)
[2023-06-08 17:59:59,715] [INFO] [RANK 0] > initializing model parallel with size 1
[2023-06-08 17:59:59,716] [WARNING] [comm.py:152:init_deepspeed_backend] NCCL backend in DeepSpeed not yet implemented
[2023-06-08 17:59:59,716] [INFO] [comm.py:594:init_distributed] cdb=None
[2023-06-08 17:59:59,717] [WARNING] [config_utils.py:69:_process_deprecated_field] Config parameter cpu_offload is deprecated use offload_optimizer instead
[2023-06-08 17:59:59,718] [INFO] [checkpointing.py:764:_configure_using_config_file] {'partition_activations': False, 'contiguous_memory_optimization': False, 'cpu_checkpointing': False, 'number_checkpoints': None, 'synchronize_checkpoint_boundary': False, 'profile': False}
[2023-06-08 17:59:59,719] [INFO] [checkpointing.py:226:model_parallel_cuda_manual_seed] > initializing model parallel cuda seeds on global rank 0, model parallel rank 0, and data parallel rank 0 with model parallel seed: 3952 and data parallel seed: 1234
[2023-06-08 17:59:59,719] [INFO] [RANK 0] building FineTuneVisualGLMModel model ...
/root/miniconda3/lib/python3.8/site-packages/torch/nn/init.py:403: UserWarning: Initializing zero-element tensors is a no-op
warnings.warn("Initializing zero-element tensors is a no-op")
replacing layer 0 attention with lora
replacing layer 14 attention with lora
[2023-06-08 18:00:13,752] [INFO] [RANK 0] > number of parameters on model parallel rank 0: 7811237376
[2023-06-08 18:00:15,228] [INFO] [RANK 0] global rank 0 is loading checkpoint /root/.sat_models/visualglm-6b/1/mp_rank_00_model_states.pt
[2023-06-08 18:00:30,189] [INFO] [RANK 0] > successfully loaded /root/.sat_models/visualglm-6b/1/mp_rank_00_model_states.pt
[2023-06-08 18:00:51,479] [INFO] [RANK 0] Try to load tokenizer from Huggingface transformers...
Downloading (…)okenizer_config.json: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 441/441 [00:00<00:00, 75.4kB/s]
Downloading (…)enization_chatglm.py: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 17.0k/17.0k [00:00<00:00, 9.24MB/s]
A new version of the following files was downloaded from https://huggingface.co/THUDM/chatglm-6b:
- tokenization_chatglm.py
. Make sure to double-check they do not contain any added malicious code. To avoid downloading new versions of the code file, you can pin a revision.
[2023-06-08 18:00:53,191] [INFO] [RANK 0] Cannot find THUDM/chatglm-6b from Huggingface or sat. Creating a fake tokenizer...
Traceback (most recent call last):
File "finetune_visualglm.py", line 195, in
    training_main(args, model_cls=model, forward_step_function=forward_step, create_dataset_function=create_dataset_function, collate_fn=data_collator)
  File "/root/miniconda3/lib/python3.8/site-packages/sat/training/deepspeed_training.py", line 67, in training_main
    train_data, val_data, test_data = make_loaders(args, hooks['create_dataset_function'], collate_fn=collate_fn)
  File "/root/miniconda3/lib/python3.8/site-packages/sat/data_utils/configure_data.py", line 197, in make_loaders
    train = make_dataset(**data_set_args, args=args, dataset_weights=args.train_data_weights, is_train_data=True)
  File "/root/miniconda3/lib/python3.8/site-packages/sat/data_utils/configure_data.py", line 124, in make_dataset_full
    d = create_dataset_function(p, args)
  File "finetune_visualglm.py", line 161, in create_dataset_function
    dataset = FewShotDataset(path, image_processor, tokenizer, args)
  File "finetune_visualglm.py", line 119, in __init__
    input0 = tokenizer.encode(" ", add_special_tokens=False)
AttributeError: 'FakeTokenizer' object has no attribute 'encode'
[2023-06-08 18:00:55,646] [INFO] [launch.py:314:sigkill_handler] Killing subprocess 383
[2023-06-08 18:00:55,646] [ERROR] [launch.py:320:sigkill_handler] ['/root/miniconda3/bin/python', '-u', 'finetune_visualglm.py', '--local_rank=0', '--experiment-name', 'finetune-visualglm-6b', '--model-parallel-size', '1', '--mode', 'finetune', '--train-iters', '300', '--resume-dataloader', '--max_source_length', '64', '--max_target_length', '256', '--lora_rank', '10', '--layer_range', '0', '14', '--pre_seq_len', '4', '--train-data', './fewshot-data/dataset.json', '--valid-data', './fewshot-data/dataset.json', '--distributed-backend', 'nccl', '--lr-decay-style', 'cosine', '--warmup', '.02', '--checkpoint-activations', '--save-interval', '300', '--eval-interval', '10000', '--save', './checkpoints', '--split', '1', '--eval-iters', '10', '--eval-batch-size', '8', '--zero-stage', '1', '--lr', '0.0001', '--batch-size', '4', '--skip-init', '--fp16', '--use_lora'] exits with return code = 1
root@autodl-container-9ed51187fa-29111776:~/autodl-tmp/VisualGLM-6B#
This problem was caused by my changing tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True) to tokenizer = AutoTokenizer.from_pretrained("models--THUDM--visualglm-6b/snapshots/f4f759acde0926fefcd35e2c626e08adb452eff8", trust_remote_code=True).
Changing it back fixed it.
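For reference, a sketch of the unmodified call, with an optional cache_dir argument in case the default Huggingface cache disk is too small (the directory is only an example, not something the repo defines):

```python
from transformers import AutoTokenizer

# Load the chatglm-6b tokenizer by its hub id, as the original script does.
# cache_dir is optional: it only redirects the Huggingface download cache to
# a larger disk; omit it to keep the default location (~/.cache/huggingface).
tokenizer = AutoTokenizer.from_pretrained(
    "THUDM/chatglm-6b",
    trust_remote_code=True,
    cache_dir="/root/autodl-tmp/hf_cache",  # example path
)
```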
It should be fine-tuning now, right?
Not being able to fine-tune on Windows is really inconvenient _(:3_/ Before long my other disk filled up again, sigh.
You can fine-tune on Windows, as long as you install deepspeed.
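A quick sanity check after installing (a sketch, assuming a working Python environment on Windows):

```python
# Verify that deepspeed imports cleanly after installation; the fine-tuning
# script is launched through deepspeed, so this has to succeed first.
import deepspeed
print(deepspeed.__version__)
```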
Thanks for the help! Yesterday I got fine-tuning running successfully on autoDL and produced a model: with 20-odd images, 300 steps of LoRA fine-tuning takes about 15 minutes on an A100. I just haven't reached the results I expected yet, and I'm still debating whether to add more training samples or increase the number of steps _(:3_/ Thanks again for all the help! Next I want to look into how to deploy deepspeed on Windows.