Shaojun Liu

Results: 22 comments of Shaojun Liu

> At the beginning of the tutorial (before the table of contents), we need a very short (one or two sentences) description that talks about what BigDL PPML is from...

I got the same error when running a job on the self-hosted runners: `The self-hosted runner: xxxx lost communication with the server. Verify the machine is running and has a healthy...

Hi @sungkim11, are you following this document to install IPEX-LLM in Docker: https://ipex-llm.readthedocs.io/en/latest/doc/LLM/Quickstart/docker_windows_gpu.html#? We will try to reproduce the issue on our side and also try the method suggested by @digitalscream.

Hi @sungkim11 I tried running a Docker container (intelanalytics/ipex-llm-xpu) on a machine equipped with an A770 GPU and integrated graphics (iGPU), following the steps in the [documentation](https://ipex-llm.readthedocs.io/en/latest/doc/LLM/Quickstart/docker_windows_gpu.html#). After disabling the...
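For reference, a minimal sketch of launching such a container on Linux. The image tag, container name, and resource flags below are assumptions for illustration; the quickstart linked above is authoritative:

```shell
# Sketch: run the ipex-llm XPU image with the Intel GPU exposed.
# Assumptions: tag "latest", and the GPU's render nodes live under /dev/dri.
export DOCKER_IMAGE=intelanalytics/ipex-llm-xpu:latest  # tag is an assumption

sudo docker run -itd \
    --net=host \
    --device=/dev/dri \
    --memory=32G \
    --shm-size=16g \
    --name=ipex-llm-container \
    "$DOCKER_IMAGE"
```

`--device=/dev/dri` is what actually passes the Intel GPU into the container; without it, only CPU execution is available inside.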

We are working on a fix, and we'll update this issue when it's ready. Alternatively, you can try using the A770 for graphical interface support.

> We just need one quick start for running PyTorch inference on GPU using Docker (on Linux or WSL): > > 1. Install docker > 2. Launch docker > 3....

Could you provide environment information using the scripts at https://github.com/intel-analytics/ipex-llm/tree/main/python/llm/scripts?

Hi @HumerousGorgon, we recommend setting up your environment with **Ubuntu 22.04** and **Kernel 6.5**, and installing the **Intel-i915-dkms** driver for optimal performance. Once you have the Ubuntu OS set up, please...

It might be related to the CPU/GPU frequency. You can try adjusting the CPU/GPU frequency to see if it has any impact. For **CPU frequency**, you can use `sudo cpupower...
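A sketch of the kind of frequency checks meant above. The exact commands depend on your distribution and card index; the governor choice and `card0` path below are assumptions (the original `sudo cpupower...` command was truncated, so this does not claim to reproduce it):

```shell
# Sketch (assumptions: cpupower is installed; card0 is the Intel GPU).

# CPU: pin all cores to the performance governor.
sudo cpupower frequency-set -g performance

# GPU (i915 driver): inspect the current min/max frequencies via sysfs...
cat /sys/class/drm/card0/gt_min_freq_mhz
cat /sys/class/drm/card0/gt_max_freq_mhz

# ...and raise the minimum so the GPU doesn't idle down mid-inference
# (1000 MHz is an illustrative value, not a recommendation).
echo 1000 | sudo tee /sys/class/drm/card0/gt_min_freq_mhz
```

Raising `gt_min_freq_mhz` mainly helps latency-sensitive workloads where the GPU would otherwise downclock between bursts of work.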

Currently, our VLLM integration does not support multimodal models. Support for multimodal models is in progress for the 0.5.x version of VLLM. We will notify you once it's ready.