Ritu Raj

11 issues by Ritu Raj

I am trying to explore the backend server. After resolving dependency issues, I tried to start the server, but the system doesn't show any running backend server, nor do the logs help...

aitce
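Not part of the original report, but a quick first check for this kind of "server doesn't appear to be running" situation is to probe the expected port directly. A minimal sketch; the host and port below are assumptions, not values from the issue.

```python
import socket

HOST, PORT = "127.0.0.1", 8000  # assumed defaults; the real backend port may differ

try:
    # Try a plain TCP connection to see whether anything is listening.
    with socket.create_connection((HOST, PORT), timeout=2):
        print(f"Something is listening on {HOST}:{PORT}")
except OSError as exc:
    print(f"No server reachable on {HOST}:{PORT}: {exc}")
```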

Followed the guidelines mentioned here: https://github.com/intel/intel-extension-for-transformers/blob/main/intel_extension_for_transformers/neural_chat/examples/deployment/talkingbot/server/backend/README.md **First error**: the positional argument 'model_type' is missing, and it is not given in the example. ``` TypeError Traceback (most recent call last) Cell In[17], line 7... ```
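For a TypeError like this, inspecting the callable's signature shows exactly which arguments the example left out. A hedged sketch: `example_entry_point` is a hypothetical stand-in for whatever function the README example invokes, not the confirmed API.

```python
import inspect

def example_entry_point(model_type, device="cpu"):
    """Hypothetical stand-in for the callable the README example invokes."""
    return model_type, device

# List which parameters are required; a TypeError such as
# "missing 1 required positional argument: 'model_type'" means
# one of the required ones was omitted from the call.
sig = inspect.signature(example_entry_point)
for name, param in sig.parameters.items():
    if param.default is inspect.Parameter.empty:
        print(f"{name}: required")
    else:
        print(f"{name}: optional (default={param.default!r})")
```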

**Followed the guidelines mentioned here:** https://github.com/intel/intel-extension-for-transformers/tree/main/intel_extension_for_transformers/neural_chat/ui/customized/talkingbot The process failed while installing dependencies: ``` npm WARN deprecated @types/sass@…: This is a stub types definition. sass provides its own type definitions, so you do... ```

**Base model: Weyaxi/Dolphin2.1-OpenOrca-7B** **Scenario:** - Followed these guidelines: https://github.com/intel-analytics/ipex-llm/tree/main/python/llm/example/GPU/LLM-Finetuning/QLoRA/alpaca-qlora#1-install - Fine-tuning method: alpaca-qlora (IPEX-LLM) - After fine-tuning, when trying to merge the model, there is a torch data_type issue ```...

user issue
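The merge step itself is truncated out of the preview, but a common source of merge-time dtype errors is loading the base model in one dtype and the adapter in another. A sketch of the usual PEFT merge with the dtype pinned explicitly; the adapter path and the float16 choice are assumptions, and the actual ipex-llm merge script may differ.

```python
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

BASE_MODEL = "Weyaxi/Dolphin2.1-OpenOrca-7B"
ADAPTER_DIR = "./outputs/checkpoint-xxx"  # hypothetical path to the QLoRA adapter

# Load the base model in one explicit dtype so the adapter weights are cast
# consistently when they are folded in.
base = AutoModelForCausalLM.from_pretrained(BASE_MODEL, torch_dtype=torch.float16)
model = PeftModel.from_pretrained(base, ADAPTER_DIR)
merged = model.merge_and_unload()  # folds the LoRA weights into the base model
merged.save_pretrained("./merged-model")
```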

**Scenario:** - Completed the fine-tuning of 'Weyaxi/Dolphin2.1-OpenOrca-7B' using ipex-llm on a GPU Max 1100 - The output directory, with checkpoints and a config file, looks like the below - ![image](https://github.com/intel-analytics/ipex-llm/assets/110594625/a3e13539-b34c-4828-9402-3790b3621efb) - made...

user issue
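Given an output directory full of `checkpoint-*` subfolders like the one pictured, picking the newest one programmatically avoids loading a stale checkpoint. A small stdlib-only sketch; the directory name is an assumption.

```python
from pathlib import Path

OUTPUT_DIR = Path("./outputs")  # assumed location of the fine-tuning output

# Trainer-style runs save numbered subdirectories such as checkpoint-100;
# sort by the numeric suffix to find the most recent one.
checkpoints = sorted(
    OUTPUT_DIR.glob("checkpoint-*"),
    key=lambda p: int(p.name.split("-")[-1]),
)
print("Latest checkpoint:", checkpoints[-1] if checkpoints else "none found")
```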

Trying to run inference on an Arc GPU machine; have followed these guidelines: ``` https://github.com/intel-analytics/ipex-llm/tree/main/python/llm/example/GPU/Pipeline-Parallel-Inference and run_mistral_arc_2_card.sh ``` ``` (llm) :~/xxx/ipex-llm/python/llm/example/GPU/Pipeline-Parallel-Inference$ bash run_llama_arc_2_card.sh :: WARNING: setvars.sh has already been run. Skipping... ```

user issue
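Before debugging the two-card script itself, it is worth confirming that both Arc GPUs are visible to the XPU backend at all. A minimal sketch, assuming intel-extension-for-pytorch is installed in the env.

```python
import torch
import intel_extension_for_pytorch as ipex  # noqa: F401 - registers torch.xpu

# A two-card pipeline-parallel run needs both devices enumerated here; if
# fewer than 2 appear, check the driver and oneAPI environment first.
print("XPU available:", torch.xpu.is_available())
print("XPU device count:", torch.xpu.device_count())
for i in range(torch.xpu.device_count()):
    print(f"  device {i}: {torch.xpu.get_device_name(i)}")
```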

GPU: 2 Arc cards. Running the following example: [inference-ipex-llm](https://github.com/intel-analytics/ipex-llm/tree/main/python/llm/example/GPU/Pipeline-Parallel-Inference) **for Mistral and CodeLlama (working for Llama 2)** ``` My guessed rank = 1 My guessed rank = 0 2024-06-24 11:32:19,965 - INFO... ```

user issue
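The "My guessed rank" lines suggest each process is deriving its rank from the launcher's environment. A hedged sketch for dumping the rank-related variables so the two processes can be told apart; which variables are actually set depends on the launcher, so the names below are assumptions.

```python
import os

# Different launchers export different variables (Intel MPI uses PMI_*,
# torchrun uses RANK/WORLD_SIZE/LOCAL_RANK); print whichever are present.
for var in ("PMI_RANK", "PMI_SIZE", "RANK", "WORLD_SIZE", "LOCAL_RANK"):
    print(f"{var}={os.environ.get(var, '<unset>')}")
```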

Using the ipex-llm Docker version for inference, but at inference time it hits errors from the util files; below is the log: ``` Inferencing ./samples/customer_sku_transformation.txt ... The installed version of... ```

user issue
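Since the truncated log opens with an "installed version" message, a sensible first diagnostic is to record the exact package versions inside the container. A sketch; the package list is an assumption to adjust to the actual stack.

```python
from importlib.metadata import version, PackageNotFoundError

# Record the versions that matter for the inference stack.
for pkg in ("ipex-llm", "torch", "transformers", "accelerate"):
    try:
        print(f"{pkg}=={version(pkg)}")
    except PackageNotFoundError:
        print(f"{pkg}: not installed")
```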

### Describe the bug Created a conda env with Python 3.11 and set it up with all the required libraries. Ran the sanity test; it fails to import pkg_resources: `python -c "import torch; import...

NotAnIssue
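`pkg_resources` is provided by setuptools rather than being a standalone package, so a failing import in a fresh Python 3.11 env usually means setuptools is missing or too old. A minimal check, with the usual fix noted in a comment.

```python
try:
    import pkg_resources  # shipped with setuptools, not installable on its own
    print("pkg_resources OK, setuptools present")
except ImportError:
    # Typical fix in a fresh conda env:
    #   pip install --upgrade setuptools
    print("pkg_resources missing; install or upgrade setuptools")
```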

**Describe the bug** New guidelines are needed to perform fine-tuning and inference on multi-GPU BMG machines. Issue: 1. Currently, we are using the following XPU libraries --> [bmg-xpu]( pip install --pre...

user issue