aoke79

Results: 8 issues by aoke79

I've modified the code at https://github.com/intel-analytics/BigDL/blob/main/python/llm/example/GPU/HF-Transformers-AutoModels/Model/chatglm3/streamchat.py to measure both the first-token latency and the average per-token performance, and both are lower than expected. Please help check this. ```python if __name__ == '__main__': parser =...
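For a perf check like the streamchat.py one above, a model-agnostic way to separate first-token latency (prefill) from average next-token latency (decode) is to time the streaming generator itself. This is a minimal sketch, not IPEX-LLM's API; `measure_stream_latency` and `fake_stream` are hypothetical names, and the sleeps only simulate generation delays.

```python
# Minimal sketch (hypothetical helper, not the IPEX-LLM API): time a streaming
# token generator to separate first-token latency from next-token latency.
import time


def measure_stream_latency(token_stream):
    """Return (first_token_latency_s, avg_next_token_latency_s, n_tokens)."""
    start = time.perf_counter()
    first = None
    count = 0
    for _ in token_stream:
        now = time.perf_counter()
        if first is None:
            first = now - start  # prefill + first decode step
        count += 1
    total = time.perf_counter() - start
    # Average over the remaining (decode-only) tokens, if any.
    avg = (total - first) / (count - 1) if count > 1 else 0.0
    return first, avg, count


if __name__ == "__main__":
    # Stand-in for something like model.stream_chat(...); sleeps simulate delays.
    def fake_stream():
        time.sleep(0.05)           # simulated prefill
        yield "Hello"
        for _ in range(4):
            time.sleep(0.01)       # simulated decode steps
            yield " token"

    first, avg, n = measure_stream_latency(fake_stream())
    print(f"first token: {first * 1000:.1f} ms, "
          f"avg next token: {avg * 1000:.1f} ms over {n} tokens")
```

In a real run you would pass the generator returned by the model's streaming API instead of `fake_stream()`; the first-token number then reflects prefill cost and the average reflects steady-state decode speed.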

user issue

(env_p311) C:\AIGC\llama\ipex-llm\python\llm\dev\benchmark\harness>python run_llb.py --model ipex-llm --pretrained "C:\AIGC\hf\Meta--Llama-3-8B-Instruct" --precision sym_int4 --device xpu --tasks piqa --batch 1 --no_cache Could not import signal.SIGPIPE (this is expected on Windows machines) C:\ProgramData\anaconda3\envs\env_p311\Lib\site-packages\torchvision\io\image.py:13: UserWarning: Failed to...

user issue

Dear team, I've tried running Phi-3-mini with the Phi-2 example code and hit the errors below; could you please take a look? (env_p311) C:\AIGC\llama\ipex-llm\python\llm\example\GPU\HF-Transformers-AutoModels\Model\phi-3-ed>python ./generate.py --repo-id-or-model-path "C:\AIGC\hf\Phi-3-mini-128k-instruct" --prompt "What is AI?" C:\ProgramData\anaconda3\envs\env_p311\Lib\site-packages\torchvision\io\image.py:13: UserWarning:...

user issue

I've run the benchmark to test Stable Diffusion v1.5, but the quality of the generated images is low. I suspect something is being computed incorrectly in the process. I compared it...

Dear team, I've run llm_bench\python to test Stable Diffusion v1.5 and found a regression between different OpenVINO packages, as below: 1. 2023.3 -> 8.73 s 2. 2024.0 -> 10.65 s 3. 2024.1 -> 9.18 s. The parameters are 20...
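When comparing timings across package versions (2023.3 vs 2024.0 vs 2024.1), single runs are noisy; a warmup call plus the median over several repeats makes regression numbers more trustworthy. This is a generic sketch; `benchmark` is a hypothetical helper, not part of llm_bench, and the lambda workload is a stand-in for the real pipeline call.

```python
# Generic timing harness (hypothetical helper, not part of llm_bench):
# warm up once, then report the median wall-clock time over several runs.
import statistics
import time


def benchmark(fn, warmup=1, runs=5):
    """Return the median wall-clock seconds of fn() over `runs` calls."""
    for _ in range(warmup):
        fn()  # discarded: first call may include compilation/caching
    times = []
    for _ in range(runs):
        t0 = time.perf_counter()
        fn()
        times.append(time.perf_counter() - t0)
    return statistics.median(times)


if __name__ == "__main__":
    # Stand-in workload; in practice this would be the SD pipeline call,
    # e.g. a lambda wrapping the 20-step image generation.
    t = benchmark(lambda: time.sleep(0.02), warmup=1, runs=3)
    print(f"median: {t:.3f} s")
```

Using the median rather than the mean keeps a single outlier run (e.g. a background process stealing the GPU) from skewing the version-to-version comparison.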

### Is there an existing issue for this? - [X] I have searched the existing issues and checked the recent builds/commits ### What happened? When I downloaded the latest code...

Dear team, I've tried to port uform/moondream2 onto the IA platform with BigDL, but it failed. Could you please take a look? I've attached the source code FYI. [Uploading moondream.zip…]() Thanks a...

user issue

Dear team, I followed the example code at https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/GPU/PyTorch-Models/Model/llava to run the LLaVA 1.2 code with the https://huggingface.co/liuhaotian/llava-v1.6-vicuna-7b model; it prints a lot of messages, and the answers look a little weird. [log_code_1.2_model_1.6_0318_fp32_02.txt](https://github.com/intel-analytics/BigDL/files/14631412/log_code_1.2_model_1.6_0318_fp32_02.txt)

user issue