FlareP1

Results: 7 comments of FlareP1

Yes, I agree with @dranger003 above: a local compile does not fix the issue. I also tried both the cuBLAS and CLBlast options; both produce gibberish. I only have one GPU. Do I...

> In llama.cpp line 1158 there should be:
>
> ```
> vram_scratch = n_batch * MB;
> ```
>
> Someone that is experiencing the issue please try to...
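The quoted fix scales the VRAM scratch buffer with the batch size rather than using a fixed allocation. A minimal sketch of that sizing rule, assuming `MB` is a one-megabyte constant and `n_batch` is the batch size as in llama.cpp (the surrounding names and values here are illustrative, not the library's actual code):

```cpp
#include <cstddef>
#include <cassert>

// Hypothetical constant mirroring llama.cpp's MB: one mebibyte in bytes.
static const size_t MB = 1024u * 1024u;

// Sketch of the proposed scratch sizing: reserve one MB of VRAM scratch
// per batch slot, so larger batches get proportionally more scratch space.
size_t vram_scratch_size(size_t n_batch) {
    return n_batch * MB;
}
```

With the default batch size of 512 this would reserve 512 MiB of scratch, which is consistent with the out-of-memory reports below on memory-constrained setups such as WSL2.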

> It is working as intended on my machines, which all run Linux. The first step for me to make a fix is to be able to reproduce the issue...

I have reverted the changes and checked out commit 44f906e8537fcec965e312d621c80556d6aa9bec. On my version of WSL2 this still does not work and gives the same out-of-memory error, so...

> > > It is working as intended on my machines which all run Linux. The first step for me to make a fix is to be able to reproduce...

> I have bad news: on my main desktop I am not experiencing the bug when using Windows. I'll try setting up llama.cpp on the other machine that I have....
