ipex-llm
Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, Baichuan, Mixtral, Gemma, Phi, etc.) on Intel CPU and GPU (e.g., local PC with iGPU, discrete GPU such as Arc, Flex and Ma...
## Description Add Orca Python example; test Ray to run on self-hosted runners
## Description ### 1. Why the change? ### 2. User API changes ### 3. Summary of the change ### 4. How to test? - [ ] N/A - [ ]...
## Description To fix https://github.com/intel-analytics/BigDL/issues/5363 ### 1. Why the change? change bigdl-core version to nightly ### 4. How to test? - [x] Jenkins
## Description Add build-example-test-ppml workflows
## Description According to customer feedback, there is a `SparkContext` error while loading HDFS files in Ray workers, which is because users use the `enable_multi_fs_load` decorator to load HDFS file...
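The decorator pattern mentioned above can be sketched as follows. This is a hypothetical stand-in, not BigDL's actual implementation: the real `enable_multi_fs_load` lives in BigDL Orca and handles remote filesystems properly, whereas this sketch only illustrates the wrapping idea of dispatching on the URI scheme before calling the user's loader, so the loader itself stays filesystem-agnostic.

```python
import functools

def enable_multi_fs_load(load_func):
    """Hypothetical sketch of a multi-filesystem load decorator.

    Wraps a loader so it can be called with either a plain local path or
    an hdfs:// URI. The real BigDL decorator would fetch bytes through an
    HDFS client here; this sketch just strips the scheme for illustration.
    """
    @functools.wraps(load_func)
    def wrapper(path, *args, **kwargs):
        if path.startswith("hdfs://"):
            # Assumption for illustration: treat the remainder as a
            # locally resolvable path instead of a real HDFS fetch.
            path = path[len("hdfs://"):]
        return load_func(path, *args, **kwargs)
    return wrapper

@enable_multi_fs_load
def load_text(path):
    # The decorated loader only ever sees a plain path.
    with open(path) as f:
        return f.read()
```

A Ray worker would then call `load_text("hdfs://...")` directly, without needing a `SparkContext` in the worker process.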
## Description ### 1. Why the change? update tpp LICENSE and NOTICE ### 2. User API changes None ### 3. Summary of the change Manually add components to the BOM...
## Description Add an SSL/TLS section to the Secure Your Devices README
## Description add later ### 1. Why the change? add later related issue: #4791 ### 2. User API changes No changes ### 3. Summary of the change add later ###...
## Description ### 1. Why the change? To fix the Notebook issues ### 2. User API changes No ### 3. Summary of the change 1. Removed the `NOTEBOOK_TOKEN` and `NOTEBOOK_PORT`...
## Description Add 5 examples using Nano HPO to tune the hyperparameters in TensorFlow training. ### 1. Why the change? Divide the tutorial of how to use Nano HPO to...