ipex-llm
[BMG]: Please add guidelines for fine-tuning and inference on multi-GPU BMG machines
Describe the bug New guidelines are needed for performing fine-tuning and inference on multi-GPU BMG machines.
Issue:
- Currently, we install the XPU libraries for BMG with `pip install --pre --upgrade ipex-llm[xpu_2.6] --extra-index-url https://download.pytorch.org/whl/xpu`,
- which does not require a separate oneAPI installation or other packages.
- Most of the documentation on GPU parallelism for fine-tuning and inference is based on the old XPU version,
- for example:
  - inf-mutli-gpu
  - finetuning-multi-gpu-bmg
- Additional documentation is needed for multi-GPU support on BMG machines.
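Until such a guide exists, a minimal sketch of per-card data-parallel inference on a multi-GPU BMG box might look like the following. It assumes the `xpu_2.6` install above and a hypothetical inference script (`generate.py`, not from this issue) that loads the model on the `xpu` device; each process is pinned to one card with the Level Zero affinity mask:

```shell
# Sanity check: confirm the XPU build of PyTorch sees all BMG cards
python -c "import torch; print(torch.xpu.device_count())"

# Run one independent inference process per card by restricting each
# process to a single device via ZE_AFFINITY_MASK (Level Zero env var).
# generate.py is a hypothetical script that loads the model on 'xpu'.
ZE_AFFINITY_MASK=0 python generate.py &
ZE_AFFINITY_MASK=1 python generate.py &
wait
```

This only covers running independent processes per card; sharded fine-tuning or tensor-parallel inference across cards would still need the dedicated BMG guide this issue requests.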