VisualGLM-6B
NameError: name 'HackLinearNF4' is not defined
$ sh finetune/finetune_visualglm_qlora.sh
NCCL_DEBUG=info NCCL_IB_DISABLE=0 NCCL_NET_GDR_LEVEL=2 deepspeed --master_port 16666 --include localhost:0 --hostfile hostfile_single finetune_visualglm.py --experiment-name finetune-visualglm-6b --model-parallel-size 1 --mode finetune --train-iters 300 --resume-dataloader --max_source_length 64 --max_target_length 256 --lora_rank 10 --layer_range 0 14 --pre_seq_len 4 --train-data ./fewshot-data/dataset.json --valid-data ./fewshot-data/dataset.json --distributed-backend nccl --lr-decay-style cosine --warmup .02 --checkpoint-activations --save-interval 300 --eval-interval 10000 --save ./checkpoints --split 1 --eval-iters 10 --eval-batch-size 8 --zero-stage 1 --lr 0.0001 --batch-size 1 --gradient-accumulation-steps 4 --skip-init --fp16 --use_qlora
[2023-06-01 20:35:04,581] [WARNING] [runner.py:191:fetch_hostfile] Unable to find hostfile, will proceed with training with local resources only.
[2023-06-01 20:35:04,698] [INFO] [runner.py:541:main] cmd = /opt/conda/bin/python -u -m deepspeed.launcher.launch --world_info=eyJsb2NhbGhvc3QiOiBbMF19 --master_addr=127.0.0.1 --master_port=16666 --enable_each_rank_log=None finetune_visualglm.py --experiment-name finetune-visualglm-6b --model-parallel-size 1 --mode finetune --train-iters 300 --resume-dataloader --max_source_length 64 --max_target_length 256 --lora_rank 10 --layer_range 0 14 --pre_seq_len 4 --train-data ./fewshot-data/dataset.json --valid-data ./fewshot-data/dataset.json --distributed-backend nccl --lr-decay-style cosine --warmup .02 --checkpoint-activations --save-interval 300 --eval-interval 10000 --save ./checkpoints --split 1 --eval-iters 10 --eval-batch-size 8 --zero-stage 1 --lr 0.0001 --batch-size 1 --gradient-accumulation-steps 4 --skip-init --fp16 --use_qlora
[2023-06-01 20:35:07,264] [INFO] [launch.py:222:main] 0 NCCL_DEBUG=info
[2023-06-01 20:35:07,264] [INFO] [launch.py:222:main] 0 NCCL_NET_GDR_LEVEL=2
[2023-06-01 20:35:07,264] [INFO] [launch.py:222:main] 0 NCCL_IB_DISABLE=0
[2023-06-01 20:35:07,264] [INFO] [launch.py:222:main] 0 USE_NCCL=1
[2023-06-01 20:35:07,264] [INFO] [launch.py:229:main] WORLD INFO DICT: {'localhost': [0]}
[2023-06-01 20:35:07,264] [INFO] [launch.py:235:main] nnodes=1, num_local_procs=1, node_rank=0
[2023-06-01 20:35:07,264] [INFO] [launch.py:246:main] global_rank_mapping=defaultdict(<class 'list'>, {'localhost': [0]})
[2023-06-01 20:35:07,264] [INFO] [launch.py:247:main] dist_world_size=1
[2023-06-01 20:35:07,264] [INFO] [launch.py:249:main] Setting CUDA_VISIBLE_DEVICES=0
===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please submit your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
/opt/conda/lib/python3.8/site-packages/bitsandbytes/cuda_setup/main.py:136: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/opt/taobao/java/jre/lib/amd64/server')}
warn(msg)
CUDA SETUP: CUDA runtime path found: /usr/local/cuda/lib64/libcudart.so
CUDA SETUP: Highest compute capability among GPUs detected: 8.0
CUDA SETUP: Detected CUDA version 116
CUDA SETUP: Loading binary /opt/conda/lib/python3.8/site-packages/bitsandbytes/libbitsandbytes_cuda116.so...
[2023-06-01 20:35:11,849] [WARNING] Failed to load bitsandbytes: cannot import name 'LinearNF4' from 'bitsandbytes.nn' (/opt/conda/lib/python3.8/site-packages/bitsandbytes/nn/__init__.py)
[2023-06-01 20:35:11,852] [INFO] using world size: 1 and model-parallel size: 1
[2023-06-01 20:35:11,852] [INFO] > padded vocab (size: 100) with 28 dummy tokens (new size: 128)
16666
[2023-06-01 20:35:11,853] [INFO] [RANK 0] > initializing model parallel with size 1
[2023-06-01 20:35:11,854] [WARNING] [config_utils.py:69:_process_deprecated_field] Config parameter cpu_offload is deprecated use offload_optimizer instead
[2023-06-01 20:35:11,854] [INFO] [checkpointing.py:764:_configure_using_config_file] {'partition_activations': False, 'contiguous_memory_optimization': False, 'cpu_checkpointing': False, 'number_checkpoints': None, 'synchronize_checkpoint_boundary': False, 'profile': False}
[2023-06-01 20:35:11,854] [INFO] [checkpointing.py:226:model_parallel_cuda_manual_seed] > initializing model parallel cuda seeds on global rank 0, model parallel rank 0, and data parallel rank 0 with model parallel seed: 3952 and data parallel seed: 1234
[2023-06-01 20:35:11,912] [INFO] [RANK 0] building FineTuneVisualGLMModel model ...
/opt/conda/lib/python3.8/site-packages/torch/nn/init.py:405: UserWarning: Initializing zero-element tensors is a no-op
warnings.warn("Initializing zero-element tensors is a no-op")
replacing layer 0 with lora
Traceback (most recent call last):
File "finetune_visualglm.py", line 179, in <module>
https://github.com/THUDM/VisualGLM-6B/issues/85
Also, I'd suggest updating to the latest repository code; the new code should print a hint for this case (and it also fixes some bugs).
pip install bitsandbytes --upgrade
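After upgrading, the import that failed in the log above can be probed directly. A minimal sketch, assuming only that newer bitsandbytes releases expose `LinearNF4` in `bitsandbytes.nn` (the `has_linear_nf4` helper name is mine, not part of VisualGLM or bitsandbytes):

```python
import importlib.util

def has_linear_nf4() -> bool:
    """Return True if the installed bitsandbytes exposes the NF4 4-bit
    linear layer that VisualGLM-6B's QLoRA path wraps as HackLinearNF4."""
    # Avoid an ImportError if bitsandbytes itself is not installed.
    if importlib.util.find_spec("bitsandbytes") is None:
        return False
    try:
        from bitsandbytes.nn import LinearNF4  # noqa: F401
        return True
    except Exception:
        # Old bitsandbytes builds lack LinearNF4 (this is exactly what
        # produced the "Failed to load bitsandbytes" warning above);
        # the import can also fail during CUDA setup.
        return False

print("LinearNF4 available:", has_linear_nf4())
```

If this still prints `False` after the upgrade, the environment is likely resolving a stale bitsandbytes copy, which would reproduce the same `NameError: name 'HackLinearNF4' is not defined`.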