
ValueError: 'rope_scaling' with meta-llama/Llama-3.2-1B

Open · Kpeacef opened this issue 1 year ago · 1 comment

Hi, I would like to try meta-llama/Llama-3.2-1B in different scenarios of the IPEX-LLM solutions.

pip list:

    bigdl-core-xe-21             2.6.0b20241001
    intel-extension-for-pytorch  2.1.10+xpu
    intel-openmp                 2024.2.1
    ipex-llm                     2.2.0b20241001
    torch                        2.1.0a0+cxx11.abi
    torchvision                  0.16.0a0+cxx11.abi

I encountered several errors:

  1. All-in-one benchmark (INT4 and FP16) fails with ValueError: rope_scaling must be a dictionary with two fields, type and factor, got {'factor': 32.0, 'high_freq_factor': 4.0, 'low_freq_factor': 1.0, 'original_max_position_embeddings': 8192, 'rope_type': 'llama3'} (a minimal check reproducing this follows after this list).

  2. example/GPU/HuggingFace/LLM/llama3.2, following its instructions:

    pip install transformers==4.45.0
    pip install accelerate==0.33.0
    pip install trl

python ./generate.py --repo-id-or-model-path meta-llama/Llama-3.2-1B works and outputs:

    What is AI?<|eot_id|><|start_header_id|>assistant<|end_header_id|>
    The following is a list of some of the things that we are working on. We are constantly working on new things.

But going back to the all-in-one benchmark, it now fails with ImportError: cannot import name 'BenchmarkWrapper' from 'ipex_llm.utils'.
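For reference, here is a minimal check that reproduces error 1 outside the benchmark harness. This is just an illustration, not code from the benchmark itself; it assumes access to the gated meta-llama/Llama-3.2-1B repo and a transformers release older than 4.43, where the rope_scaling validation runs while the config is parsed.

```python
from transformers import AutoConfig

# On transformers < 4.43 this call already raises:
#   ValueError: `rope_scaling` must be a dictionary with two fields,
#   `type` and `factor`, got {'rope_type': 'llama3', ...}
config = AutoConfig.from_pretrained("meta-llama/Llama-3.2-1B")

# On transformers >= 4.43 the llama3-style rope_scaling dict parses cleanly:
print(config.rope_scaling)
```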

Please let us know if we can perform the all-in-one benchmark. Thank you.

Kpeacef · Oct 01 '24 14:10

Hi Kpeacef,

We have looked into this issue and reproduced the problem. Here are the solutions:

  1. For the error ValueError: 'rope_scaling', refer to this issue and upgrade transformers to version 4.43.
  2. For the error ImportError: cannot import name 'BenchmarkWrapper' from 'ipex_llm.utils', please refer to the README of /GPU/HuggingFace/LLM/llama3.2 and change your transformers version to 4.43; version 4.45 has not been fully supported by ipex-llm yet. A minimal sketch of the resulting setup follows after this list.
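For reference, after pinning transformers to 4.43 (e.g. pip install transformers==4.43.1), the usual ipex-llm GPU loading pattern should work again. This is a minimal sketch along the lines of the generate.py example, not the exact example code; the prompt and max_new_tokens are placeholders, and it assumes an Intel GPU exposed as the xpu device.

```python
import torch
from ipex_llm.transformers import AutoModelForCausalLM
from transformers import AutoTokenizer

model_path = "meta-llama/Llama-3.2-1B"

# Load the model with ipex-llm INT4 optimizations and move it to the Intel GPU.
model = AutoModelForCausalLM.from_pretrained(model_path,
                                             load_in_4bit=True,
                                             trust_remote_code=True)
model = model.to("xpu")
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

with torch.inference_mode():
    input_ids = tokenizer.encode("What is AI?", return_tensors="pt").to("xpu")
    output = model.generate(input_ids, max_new_tokens=32)
    print(tokenizer.decode(output[0], skip_special_tokens=True))
```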

cranechu0131 · Oct 08 '24 08:10