Lim Xiang Yang
Because Mixtral requires transformers==4.36.0, I tried to run the fine-tuning example but hit an error when doing so.
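For reference, a minimal sketch (mine, not part of the fine-tuning example) of guarding the pinned version before launching the script; the `REQUIRED` constant simply mirrors the 4.36.0 requirement mentioned above:

```python
# Minimal sketch: verify the transformers pin expected by the Mixtral
# fine-tuning example before running it. The check is illustrative only.
import transformers

REQUIRED = "4.36.0"

if transformers.__version__ != REQUIRED:
    raise RuntimeError(
        f"Expected transformers=={REQUIRED} for the Mixtral fine-tuning example, "
        f"found {transformers.__version__}; reinstall with "
        f"`pip install transformers=={REQUIRED}`."
    )
```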
**Describe the bug** No results are seen when running the notebooks with the iGPU on the MTL platform. Inference completes, but no bounding box is drawn. However, when changing the...
## Overview
- Intel's Lunar Lake is releasing soon, which has CPU, NPU and iGPU in a single chip

## Tasklist
- [x] https://github.com/janhq/cortex.cpp/issues/677
- [x] https://github.com/janhq/cortex.llamacpp/issues/107
- [ ] ...
Hi, I am having an issue setting up Smart Edge Open on branch 22.03.02. The issue I have encountered is that the playbook is not able to find the role...
This PR fixes an error where the incorrect version of the outlines package was used. FIX #6261
### Your current environment
```text
Collecting environment information...
WARNING 07-09 19:49:30 _custom_ops.py:14] Failed to import from vllm._C with ModuleNotFoundError("No module named 'vllm._C'")
PyTorch version: 2.3.0+cpu
Is debug build: False
CUDA...
```
The model quantization process does not exit after the quantization completes successfully.
- Adding the steps to compile Ollama to run on the Intel(R) discrete GPU platform
- Adding the discrete GPUs that have been verified to the GPU docs
Running Mistral with `transformers==4.42.3` fails with the following error: `mistral_model_forward_4_36() got an unexpected keyword argument 'cache_position'`
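As a hedged illustration only (not the library's actual patch): newer transformers releases such as 4.42.x pass a `cache_position` keyword that a 4.36-era patched forward does not declare, so one defensive workaround is a wrapper that drops the unknown keyword before delegating.

```python
# Hedged sketch: drop keywords that a 4.36-era patched forward does not accept.
# `patched_forward` stands in for a function like mistral_model_forward_4_36;
# this wrapper is illustrative, not the fix shipped by the library.
import functools

def tolerate_extra_kwargs(patched_forward, extra=("cache_position",)):
    @functools.wraps(patched_forward)
    def wrapper(*args, **kwargs):
        for key in extra:
            kwargs.pop(key, None)  # discard kwargs introduced by newer transformers
        return patched_forward(*args, **kwargs)
    return wrapper
```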