ipex-llm
Quickstart: Run PyTorch Inference on Intel GPU using Docker (on Linux or WSL)
Description
1. Why the change?
- Add a quickstart to run PyTorch Inference on Intel GPU using Docker
- Update ipex-llm-xpu image to add benchmark & examples
You may check http://10.239.44.83:8008/doc/LLM/Quickstart/docker_pytorch_inference_gpu.html to see the quickstart.
We just need one quickstart for running PyTorch inference on Intel GPU using Docker (on Linux or WSL):
- Install docker
- Launch docker
- Run inference benchmark
- Run chat.py?
- Run PyTorch examples
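The steps above could be sketched roughly as follows. This is only an illustrative outline, not the exact commands from the quickstart: the image tag, container name, and resource limits are placeholders and may differ from what the docs specify.

```shell
# Placeholder image tag -- check the quickstart for the actual one
export DOCKER_IMAGE=intelanalytics/ipex-llm-xpu:latest

# Pull the ipex-llm-xpu image
docker pull $DOCKER_IMAGE

# Launch the container, passing the Intel GPU through via /dev/dri
docker run -itd \
  --net=host \
  --device=/dev/dri \
  --memory="32G" \
  --shm-size="16g" \
  --name=ipex-llm-container \
  $DOCKER_IMAGE

# Enter the container to run the benchmark, chat.py, or PyTorch examples
docker exec -it ipex-llm-container bash
```

The `--device=/dev/dri` flag is what exposes the Intel GPU to the container; on WSL the device path may differ.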