Wei xuyang
You can just add the `--lora_path` argument:

```shell
CUDA_VISIBLE_DEVICES=0 \
python main.py mmlu --model_name llama \
    --model_path /path/to/basemodel \
    --lora_path /path/to/loraweight
```

If you change the LoRA model, just export `PYTHONPATH` so the modified transformers/peft packages are picked up first:

```shell
export PYTHONPATH="$PYTHONPATH:/path/contains/changed-transformers-peft"
```
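For context, a minimal sketch of how `main.py` might accept this flag — the actual parser in the repo may differ, and the argument names here just mirror the command above; `--lora_path` is treated as optional so the base model runs when it is omitted:

```python
import argparse

# Hypothetical sketch of the CLI shown above; not the repo's real parser.
parser = argparse.ArgumentParser()
parser.add_argument("task")                       # e.g. "mmlu"
parser.add_argument("--model_name", required=True)
parser.add_argument("--model_path", required=True)
parser.add_argument("--lora_path", default=None)  # omit to evaluate the base model only

args = parser.parse_args([
    "mmlu",
    "--model_name", "llama",
    "--model_path", "/path/to/basemodel",
    "--lora_path", "/path/to/loraweight",
])
print(args.lora_path)
```

With `--lora_path` set, the script can load the LoRA weights on top of the base model; leaving it unset keeps evaluation on the base checkpoint.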