KiwiHana

Results: 7 issues from KiwiHana

Hi. OS: Windows 10, Arc A750, driver 5081. With chatglm3 and Baichuan2-7B, memory keeps growing as the number of conversation turns increases, and using this KV cache demo does not solve it either. Demo link: https://github.com/intel-analytics/BigDL/blob/main/python/llm/portable-zip/chat.py#L201

bigdl-core-xe-21 2.5.0b20240111
bigdl-llm 2.5.0b20240111
intel-extension-for-pytorch 2.1.10+git8ff85d6
torch 2.1.0a0+cxx11.abi
torchvision 0.16.0a0+cxx11.abi

```
python chat_chatglm3_kv.py --model-path="./models/chatglm3-6b-int4"
C:\ProgramData\miniconda3\envs\llmsd_env\lib\site-packages\torchvision\io\image.py:13: UserWarning:...
```

user issue
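For the memory-growth report above, the usual mitigation is to bound how much history and KV cache is carried between turns instead of accumulating every past token. Below is a minimal sketch of such trimming, not the logic of the linked chat.py demo; `trim_past_key_values` and `max_tokens` are hypothetical names, and the sequence axis is assumed to sit at `dim=-2` as in standard Hugging Face caches (ChatGLM-style models may order the axes differently).

```python
import torch

def trim_past_key_values(past_key_values, max_tokens, seq_dim=-2):
    """Keep only the most recent `max_tokens` positions of each layer's KV cache.

    Assumes a tuple of (key, value) tensor pairs per layer with the sequence
    axis at `seq_dim`; adjust `seq_dim` for models that order axes differently.
    """
    if past_key_values is None:
        return None
    trimmed = []
    for key, value in past_key_values:
        seq_len = key.size(seq_dim)
        if seq_len > max_tokens:
            # Drop the oldest positions, keep the most recent max_tokens.
            key = key.narrow(seq_dim, seq_len - max_tokens, max_tokens)
            value = value.narrow(seq_dim, seq_len - max_tokens, max_tokens)
        trimmed.append((key, value))
    return tuple(trimmed)
```

Applying such a cap between turns (optionally together with `torch.xpu.empty_cache()`) should keep the cache size, and hence memory use, roughly constant once the limit is reached.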

### Describe the bug

Followed this guide to install IPEX on Windows 10 (i7-1185G7): https://intel.github.io/intel-extension-for-pytorch/xpu/latest/tutorials/installations/windows.html
VS 2022, oneAPI 2023.2, Python 3.9, Miniconda.

```
conda create -n ipex python=3.9
conda install...
```

ARC
UserExperience
Windows
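A quick, generic way to confirm that an IPEX XPU install from that guide is actually functional (not part of the original report):

```python
# Importing intel_extension_for_pytorch registers the 'xpu' device with PyTorch.
import torch
import intel_extension_for_pytorch as ipex

print("torch:", torch.__version__, "ipex:", ipex.__version__)
print("XPU available:", torch.xpu.is_available())
if torch.xpu.is_available():
    print("Device 0:", torch.xpu.get_device_name(0))
    x = torch.randn(2, 3, device="xpu")  # simple allocation on the integrated GPU
    print((x + x).to("cpu"))
```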

source: https://github.com/dlstreamer/dlstreamer/blob/master/samples/ffmpeg_openvino/cpp/decode_inference/build_and_run.sh

```
dlstreamer/samples/ffmpeg_openvino/cpp/decode_inference$ source /opt/intel/openvino_2024/setupvars.sh
source /opt/intel/dlstreamer/setupvars.sh
[setupvars.sh] OpenVINO environment initialized
[setupvars.sh] GStreamer 1.20 framework initialized
[setupvars.sh] GStreamer 1.20 plugins path initialized
[setupvars.sh] Intel(R) DL Streamer environment initialized
test@test:~/kiwi/dlstreamer/samples/ffmpeg_openvino/cpp/decode_inference$...
```

```
dlstreamer/samples/ffmpeg_openvino/cpp/decode_resize_inference$ ./build_and_run.sh /home/test/kiwi/test_1080p_15.mp4 /home/test/kiwi/yolov5s_openvino_model/yolov5s.xml
CMake Warning (dev) in CMakeLists.txt:
  No project() command is present. The top-level CMakeLists.txt file must
  contain a literal, direct call to the project() command. Add...
```

Development guide for large language model applications on 13th Gen Intel Core CPUs. Please review.

https://github.com/THUDM/GLM-4 https://huggingface.co/THUDM/glm-4-9b

user issue

oneAPI 2024.0, Ubuntu 22.04, A770

```
# Model download
from modelscope import snapshot_download
model_dir = snapshot_download('OpenBMB/MiniCPM-Llama3-V-2_5')

import torch
from PIL import Image
from ipex_llm.transformers import AutoModel
# from transformers import AutoModel
from transformers...
```

user issue
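For the last report, the truncated snippet presumably continues with an ipex-llm low-bit load and a move to the A770. A hedged sketch of the usual pattern follows; the model path is hypothetical, and the arguments in the official MiniCPM-Llama3-V-2_5 example may differ (for instance, some vision modules are often kept out of the 4-bit conversion).

```python
from ipex_llm.transformers import AutoModel
from transformers import AutoTokenizer

# Local directory produced by the modelscope snapshot_download call above;
# the exact path is machine-specific, so this name is a placeholder.
model_path = "./MiniCPM-Llama3-V-2_5"

# load_in_4bit and trust_remote_code are standard ipex-llm options; whether
# extra options are needed for this multimodal model is an assumption here.
model = AutoModel.from_pretrained(model_path,
                                  load_in_4bit=True,
                                  trust_remote_code=True)
model = model.half().to("xpu")  # run on the Arc A770

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
```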