ipex-llm
Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, Baichuan, Mixtral, Gemma, Phi, etc.) on Intel CPU and GPU (e.g., local PC with iGPU, discrete GPU such as Arc, Flex and Ma...
Dear maintainers, I've tried to port uform/moondream2 to the IA platform with BigDL, but it failed. Could you please have a look? I've attached the source code FYI. [Uploading moondream.zip…]() Thanks a...
Dear maintainers, I followed the example code at https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/GPU/PyTorch-Models/Model/llava to run llava 1.2 with https://huggingface.co/liuhaotian/llava-v1.6-vicuna-7b; it prints a lot of messages, and the answer looks a little weird. [log_code_1.2_model_1.6_0318_fp32_02.txt](https://github.com/intel-analytics/BigDL/files/14631412/log_code_1.2_model_1.6_0318_fp32_02.txt)
Llama 3 performance dropped greatly from transformers 4.37.2 to 4.38.0 on recent packages such as the 0515 or 0516 builds.
## Description This PR adds internal oneCCL support for tensor parallelism (TP). It also changes the oneccl_bind_pt used for the image.
Failed to run `python offline_inference.py` from [link](https://github.com/intel-analytics/ipex-llm/tree/main/python/llm/example/CPU/vLLM-Serving) for vLLM offline inference on CPU. It seems that `llm.py` was removed in a previous version.
I'm trying to save an int4 quantized model. When I try to save it, I get the following error. Traceback (most recent call...
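For reference, a minimal sketch of how int4 saving and reloading is typically done through ipex-llm's `save_low_bit`/`load_low_bit` API (the model id and output directory below are illustrative placeholders, and the sketch assumes the ipex-llm package is installed):

```python
# Hedged sketch: saving and reloading an int4 model with ipex-llm.
# The model id and output directory are placeholders, not from the issue.
ipex_llm_available = True
try:
    from ipex_llm.transformers import AutoModelForCausalLM
except ImportError:
    ipex_llm_available = False
    print("ipex-llm not installed; skipping the save/load sketch")

if ipex_llm_available:
    # Quantize to int4 at load time, then persist the low-bit weights.
    model = AutoModelForCausalLM.from_pretrained(
        "meta-llama/Meta-Llama-3-8B-Instruct",  # placeholder model id
        load_in_4bit=True,
        trust_remote_code=True,
    )
    model.save_low_bit("./llama3-8b-int4")  # writes quantized weights + config
    # Later, reload the already-quantized checkpoint directly:
    model = AutoModelForCausalLM.load_low_bit("./llama3-8b-int4")
```

Saving with the plain Hugging Face `save_pretrained` on a low-bit model is a common source of errors; `save_low_bit` is the path the ipex-llm examples use.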
Here is the log when running the LLM with meta-llama/Meta-Llama-3-8B-Instruct. Do you know how to train this model on a downstream task? Thanks ----- Some weights of LlamaForCausalLM were not initialized from...
https://github.com/adonide/ChatGLM-vits-Unity-Live2D/tree/main Error when running `python start.py` ![image](https://github.com/intel-analytics/ipex-llm/assets/33850226/125e6b3c-946c-474d-aa22-375ca8827e1b) Device name: ultra; Processor: Intel(R) Core(TM) Ultra 9 185H 2.30 GHz; Installed RAM: 32.0 GB (31.6 GB usable); Device ID: 31491060-0A0A-472C-B853-6F171FCE28EE; Product ID: 00342-31603-86596-AAOEM...
Hello ipex-llm experts, I'm hitting an issue with Llama-3-8B on the MTL-H iGPU and would appreciate any advice. :) It seems to be an issue with the iGPU on the MTL 155H but no...
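As a point of comparison, this is the usual ipex-llm flow for running a model on an Intel iGPU via the XPU device (model id and prompt are illustrative; this assumes the XPU build of ipex-llm and its PyTorch dependency are installed):

```python
# Hedged sketch of 4-bit inference on an Intel iGPU (XPU) with ipex-llm.
# Model id and prompt are illustrative assumptions, not from the issue.
ipex_llm_available = True
try:
    import torch
    from ipex_llm.transformers import AutoModelForCausalLM
    from transformers import AutoTokenizer
except ImportError:
    ipex_llm_available = False
    print("ipex-llm (XPU build) not installed; skipping the inference sketch")

if ipex_llm_available:
    model_id = "meta-llama/Meta-Llama-3-8B-Instruct"  # placeholder model id
    model = AutoModelForCausalLM.from_pretrained(model_id, load_in_4bit=True)
    model = model.to("xpu")  # move the quantized model to the Intel GPU
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    inputs = tokenizer("Hello", return_tensors="pt").to("xpu")
    with torch.inference_mode():
        out = model.generate(**inputs, max_new_tokens=32)
    print(tokenizer.decode(out[0], skip_special_tokens=True))
```

If this baseline also misbehaves on the 155H iGPU but works on other devices, that points at a driver/device issue rather than the model code.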