CodeUp
Issue running CodeUp with WSL
python3 finetune.py --base_model='TheBloke/Dolphin-Llama2-7B-GPTQ' --data_path='data/codeup_19k.json' --num_epochs=10 --cutoff_len=512 --group_by_length --output_dir='./test-llama-2/7b' --lora_target_modules='[q_proj,k_proj,v_proj,o_proj]' --lora_r=16 --micro_batch_size
===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please run
python -m bitsandbytes
and submit this information together with your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
================================================================================
bin /home/raymond/.local/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cuda121.so
CUDA SETUP: CUDA runtime path found: /usr/local/cuda/lib64/libcudart.so
CUDA SETUP: Highest compute capability among GPUs detected: 8.6
CUDA SETUP: Detected CUDA version 121
CUDA SETUP: Loading binary /home/raymond/.local/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cuda121.so...
Training Alpaca-LoRA model with params:
base_model: TheBloke/Dolphin-Llama2-7B-GPTQ
data_path: data/codeup_19k.json
output_dir: ./test-llama-2/7b
batch_size: 128
micro_batch_size: True
num_epochs: 10
learning_rate: 0.0003
cutoff_len: 512
val_set_size: 2000
lora_r: 16
lora_alpha: 16
lora_dropout: 0.05
lora_target_modules: ['q_proj', 'k_proj', 'v_proj', 'o_proj']
train_on_inputs: True
add_eos_token: False
group_by_length: True
wandb_project:
wandb_run_name:
wandb_watch:
wandb_log_model:
resume_from_checkpoint: False
prompt template: alpaca
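One thing that stands out in the dump above: micro_batch_size came through as True rather than a number, which I suspect is because the trailing --micro_batch_size flag in my command has no value. If finetune.py dispatches train() through fire.Fire the way the Alpaca-LoRA scripts do (an assumption on my part), a bare flag is parsed as a boolean. A minimal sketch of that behavior (the train() signature here is illustrative, not CodeUp's actual one):

```python
# repro.py -- minimal sketch of python-fire's bare-flag parsing
# (assumes finetune.py dispatches via fire.Fire(train), as in Alpaca-LoRA;
#  this train() signature is illustrative, not CodeUp's real one)
import fire

def train(batch_size: int = 128, micro_batch_size: int = 4):
    # With `python repro.py --micro_batch_size` (no value), fire passes the
    # boolean True here, so downstream integer math such as
    # batch_size // micro_batch_size no longer means what was intended.
    print(f"micro_batch_size = {micro_batch_size!r}")

if __name__ == "__main__":
    fire.Fire(train)
```

Running `python repro.py --micro_batch_size` prints `micro_batch_size = True`, which matches the dump, while `--micro_batch_size=4` gives the integer 4.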
Traceback (most recent call last):
File "/home/raymond/CodeUp/finetune.py", line 283, in
My CUDA version is 12.1, and the pip packages were installed following requirements.txt.
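For reference, this is the quick check I used to confirm what PyTorch itself sees (a generic sanity check, not something from the repo):

```python
# Generic CUDA/PyTorch sanity check (not part of CodeUp).
import torch

print("torch version:", torch.__version__)
print("built against CUDA:", torch.version.cuda)   # expect 12.1 here
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))
```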
Do you have any idea what might be going wrong?