
Quickstart: Run PyTorch Inference on Intel GPU using Docker (on Linux or WSL)

liu-shaojun opened this issue · 2 comments

Description

1. Why the change?

  • Add a quickstart for running PyTorch inference on Intel GPU using Docker
  • Update the ipex-llm-xpu image to add benchmarks & examples (see the pull sketch below)
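
For reference, pulling the updated image might look like the following sketch; the `intelanalytics` Docker Hub namespace and the `latest` tag are assumptions based on the image name above, so check the published quickstart for the exact coordinates.

```bash
# Pull the ipex-llm-xpu image (namespace and tag are assumed here;
# verify the exact image name and tag in the published quickstart).
docker pull intelanalytics/ipex-llm-xpu:latest
```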

You can preview the quickstart at http://10.239.44.83:8008/doc/LLM/Quickstart/docker_pytorch_inference_gpu.html.

liu-shaojun · May 09 '24

We just need one quickstart for running PyTorch inference on GPU using Docker (on Linux or WSL); a shell sketch of steps 2 and 3 follows this list:

  1. Install Docker
  2. Launch the Docker container
  3. Run the inference benchmark
  4. Run chat.py?
  5. Run the PyTorch examples
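
A minimal sketch of steps 2 and 3, assuming the `intelanalytics/ipex-llm-xpu:latest` image and a container name chosen here for illustration (neither is confirmed in this thread):

```bash
# Step 2: launch the container with the Intel GPU mapped in.
# --device=/dev/dri exposes the Intel GPU render device to the container;
# --shm-size raises shared memory for PyTorch DataLoader workers.
docker run -itd \
  --net=host \
  --device=/dev/dri \
  --shm-size="16g" \
  --name=ipex-llm-container \
  intelanalytics/ipex-llm-xpu:latest  # image name/tag assumed

# Step 3: open a shell in the running container; from there the
# benchmark and examples can be run (paths depend on the image layout).
docker exec -it ipex-llm-container bash
```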

jason-dai · May 11 '24

> We just need one quickstart for running PyTorch inference on GPU using Docker (on Linux or WSL):
>
>   1. Install Docker
>   2. Launch the Docker container
>   3. Run the inference benchmark
>   4. Run chat.py?
>   5. Run the PyTorch examples

Updated.

liu-shaojun · May 11 '24