Eduardo Alvarez
### Describe the issue
Typos and notes in this code sample.
- In the "stage 2" section you have a typo -> ipex.llm.ptimize should be -> ipex.llm.optimize
- You have...
### Describe the bug
When running this:
```python
import torch
import intel_extension_for_pytorch as ipex
from transformers import AutoTokenizer, AutoModelForCausalLM

# PART 1: Model and tokenizer loading
tokenizer = AutoTokenizer.from_pretrained("Intel/neural-chat-7b-v3-3")
model...
```
ValueError: Unsupported huggingface version: 4.34.1. You may need to upgrade your SDK version (pip install -U sagemaker) for newer huggingface versions. Supported huggingface version(s): 4.6.1, 4.10.2, 4.11.0, 4.12.3, 4.17.0, 4.26.0,...
```python
from transformers import AutoTokenizer, TextStreamer
from intel_extension_for_transformers.transformers import AutoModelForCausalLM, WeightOnlyQuantConfig

model_name = "Intel/neural-chat-7b-v3-3"

# for int8, should set weight_dtype="int8"
config = WeightOnlyQuantConfig(compute_dtype="bf16", weight_dtype="int4")
prompt = "Once upon a time,...
```
The daal_xgb_model.py module has a function called get_gbt_model_from_xgboost, presumably added because an equivalent wasn't available in the daal4py library at the time. To simplify the code, it should be removed and replaced...