
ImportError: cannot import name 'prepare_model_for_int8_training' from 'peft' (/usr/local/lib/python3.10/dist-packages/peft/__init__.py)

Open: Tizzzzy opened this issue 9 months ago • 1 comment

System Info

  1. python: 3.10.12
  2. nvcc:
nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2024 NVIDIA Corporation
Built on Thu_Mar_28_02:18:24_PDT_2024
Cuda compilation tools, release 12.4, V12.4.131
Build cuda_12.4.r12.4/compiler.34097967_0
  3. peft: 0.10.0

All other packages match the versions in requirements.txt.
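
To double-check which peft version the finetuning script actually sees (a minimal check, assuming the same Python interpreter is used for both):

python -c "import peft; print(peft.__version__)"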

Information

  • [X] The official example scripts
  • [X] My own modified scripts

🐛 Describe the bug

I am new to llama-recipes. I am trying to finetune Llama 3 on the Hugging Face dataset "openbookqa". I ran this command: python -m llama_recipes.finetuning --dataset "openbookqa" --custom_dataset.file "datasets/openbookqa_dataset.py" --batching_strategy "packing". However, I got this error:

(llama3) root@Dong:/mnt/c/Users/super/OneDrive/Desktop/research/llama-recipes# python -m llama_recipes.finetuning --dataset "openbookqa" --custom_dataset.file "datasets/openbookqa_dataset.py" --batching_strategy "packing"
Traceback (most recent call last):
  File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/usr/local/lib/python3.10/dist-packages/llama_recipes/finetuning.py", line 11, in <module>
    from peft import get_peft_model, prepare_model_for_int8_training
ImportError: cannot import name 'prepare_model_for_int8_training' from 'peft' (/usr/local/lib/python3.10/dist-packages/peft/__init__.py)

I followed the README instructions: I cloned the repo with git clone, ran pip install llama-recipes, and also ran pip install -r requirements.txt.
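
For reference, the install steps roughly as I ran them (repo URL taken from the project page; exact ordering assumed from the README):

git clone https://github.com/meta-llama/llama-recipes.git
cd llama-recipes
pip install llama-recipes
pip install -r requirements.txt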

I did some research on this error, and some people said that prepare_model_for_int8_training had been deprecated for quite some time and is gone as of PEFT v0.10.0; prepare_model_for_kbit_training should be used instead.

However, if that is the case, I don't know which file I need to change.
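
If I understand correctly, the change would roughly be the import swap below in llama_recipes/finetuning.py (a sketch only; the call-site line is assumed, not copied from the repo):

# Old import at llama_recipes/finetuning.py line 11 (fails with peft 0.10.0):
# from peft import get_peft_model, prepare_model_for_int8_training

# Replacement using the current peft API:
from peft import get_peft_model, prepare_model_for_kbit_training

# The call site (assumed) would change the same way:
# model = prepare_model_for_kbit_training(model)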

Error logs

(Same traceback as in the bug description above.)

Expected behavior

I expect to be able to finetune Llama 3.

Tizzzzy • May 13 '24 04:05

Hi, it looks like you're using an old llama-recipes version (the PyPI releases are sadly lagging behind quite a bit); we switched to prepare_model_for_kbit_training some time ago: https://github.com/meta-llama/llama-recipes/blob/fb7dd3a3270031e407338027e3f6fbea2b8e431e/src/llama_recipes/finetuning.py#L11

Please update llama-recipes from source by running:

git checkout main && git pull && pip install -U .

in the repo main directory.
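
After reinstalling, a quick sanity check (just a sketch, run in the same environment) that the new symbol imports cleanly:

python -c "from peft import prepare_model_for_kbit_training; print('ok')"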

mreso • May 13 '24 22:05

Closing this issue; feel free to reopen if there are more questions. By the way, we just updated the PyPI package.

mreso • May 17 '24 21:05