Storm
I have the same problem. How was it solved? Thanks
> > qwen/Qwen-1_8B-Chat > > Hi, I think Qwen-7B-Chat works now, and we are working on enabling Qwen-1_8B-Chat. We will keep you posted, thanks. Does OpenVINO currently support...

Baichuan1-13B-Chat does not have the same issue, so I am very confused.
> Are you using the recommended environment variable settings? Try `bigdl-llm-init -t` Yes, I installed bigdl-llm according to the instructions in the README. However, I am not quite sure what...
> Can you attach the hardware config using `lscpu`? > > You can try `sudo numactl -C 0-31 python3 mysql_qa_bench.py`. The problem is possibly caused by cross-socket memory binding.
> > When I conducted experiments using Baichuan2-13B-Chat, I found that the CPU utilization of each core was only 20%. I believe it is because BigDL-LLM...
> Repetition is fairly common with LLMs. Here are some possible solutions: > > 1. Try using `do_sample=True` in the generate API. > 2. Change the woq_config args...
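
As a minimal sketch of the first suggestion, assuming a local Baichuan2-13B-Chat checkpoint and the bigdl-llm transformers-style 4-bit loader; the model path and the sampling values (`temperature`, `top_p`, `repetition_penalty`) below are illustrative placeholders, not settings taken from this thread:

```python
# Minimal sketch: enable sampling in generate() to reduce repetition.
import torch
from transformers import AutoTokenizer
from bigdl.llm.transformers import AutoModelForCausalLM  # bigdl-llm low-bit loader

model_path = "/path/to/Baichuan2-13B-Chat"  # hypothetical local checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    load_in_4bit=True,        # weight-only quantization for CPU inference
    trust_remote_code=True,
)

prompt = "What is BigDL-LLM?"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.inference_mode():
    output = model.generate(
        input_ids,
        max_new_tokens=128,
        do_sample=True,           # sample instead of greedy decoding
        temperature=0.7,
        top_p=0.9,
        repetition_penalty=1.1,   # further discourages repeated tokens
    )

print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Note that `do_sample`, `temperature`, `top_p`, and `repetition_penalty` are standard Hugging Face `generate()` arguments rather than anything specific to bigdl-llm, so they can be tuned independently of the quantization settings.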