Ritu Raj
@WenjiaoYue, upgrading the npm version solved the dependency problem. Now I have started the server, but it doesn't start any communication. For example, I am using the default template and starting...
@WenjiaoYue, that's great; now I can record the audio, but it is still unresponsive. It stops whenever it tries to generate a response.
@qiyuangong, we are not importing 'intel_extension_for_pytorch' in inference.py; it just needs the 'LLM' function from utils. **Here is our sample code:**
```
import os
import fire
from utils...
```
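For clarity, a minimal sketch of the import structure being described (the original snippet is truncated, so everything past the imports is hypothetical: `utils.LLM` is the project's own helper, and the constructor/call shown are illustrative, not the actual code):

```python
import os
import fire

# intel_extension_for_pytorch is never imported here; it is at most a
# transitive dependency pulled in by the project's own utils module
from utils import LLM  # project helper, not a library API


def main(model_path: str = os.environ.get("MODEL_PATH", "")):
    llm = LLM(model_path)   # hypothetical constructor
    print(llm("Hello"))     # hypothetical call

if __name__ == "__main__":
    fire.Fire(main)
```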
I got this:
```
INFO: pip is looking at multiple versions of ipex-llm[xpu] to determine which version is compatible with other requirements. This could take a while.
ERROR: Could not...
```
@qiyuangong, I think it got broken somewhere.

------------

**Case 1: if I use the old xpu index like this:**
`pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/`
I get the problem below:
```
...
```
@qiyuangong, I was just using these examples on an Arc GPU:
https://github.com/intel-analytics/ipex-llm/tree/main/python/llm/example/GPU/PyTorch-Models/Model/llama2
https://github.com/intel-analytics/ipex-llm/tree/main/python/llm/example/GPU/Pipeline-Parallel-Inference
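For context, the first linked example boils down to roughly the following (a condensed sketch, not verbatim from the repo; the model path and prompt are placeholders, and it assumes a working ipex-llm[xpu] environment on the Arc GPU):

```python
import torch
from transformers import LlamaTokenizer, AutoModelForCausalLM
from ipex_llm import optimize_model

model_path = "meta-llama/Llama-2-7b-chat-hf"  # placeholder

# Load with stock transformers, then let ipex-llm apply its low-bit
# (INT4 by default) weight optimizations before moving to the GPU
model = AutoModelForCausalLM.from_pretrained(model_path,
                                             torch_dtype="auto",
                                             low_cpu_mem_usage=True)
model = optimize_model(model)
model = model.half().to("xpu")

tokenizer = LlamaTokenizer.from_pretrained(model_path)

with torch.inference_mode():
    input_ids = tokenizer.encode("What is AI?", return_tensors="pt").to("xpu")
    output = model.generate(input_ids, max_new_tokens=32)
    torch.xpu.synchronize()  # xpu ops are async; sync before decoding
    print(tokenizer.decode(output[0], skip_special_tokens=True))
```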
@Uxito-Ada Thanks for the update :) In my opinion the issue is not with 'from peft import LoraConfig'; it actually comes from calling **"DistributedType"** from the wrong file. I have...
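For reference, a minimal sketch of the import-path point being made here (assuming the standard Hugging Face accelerate package; `is_multi_xpu` is a hypothetical helper added only for illustration):

```python
# DistributedType should come from the accelerate package itself, not from
# a local module that happens to define a same-named class.
from accelerate.utils import DistributedType


def is_multi_xpu(dist_type: DistributedType) -> bool:
    # hypothetical helper: checks for the multi-XPU backend, which recent
    # accelerate releases expose as DistributedType.MULTI_XPU
    return dist_type == DistributedType.MULTI_XPU
```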