Robert Sinclair
> I suspect this is the same as [#17310](https://github.com/microsoft/terminal/issues/17310). Could you try OpenConsole 1.22.22409.26002 from the release below? I bet it's fixed there. 🙂 https://github.com/microsoft/terminal/releases/tag/v1.22.2702.0

Testing it right now. So...
The problem is still present in the latest version (https://github.com/microsoft/terminal/releases/tag/v1.21.2911.0) but not present in https://github.com/microsoft/terminal/releases/tag/v1.22.2702.0.

It is also not present in https://github.com/microsoft/terminal/releases/tag/v1.22.2912.0.
For reference, this is the output I get with the same model using b4000:

```
build: 4000 (c02e5ab2) with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu
main: llama backend init
main:...
```
> -no-cnv

Is it `-no-cnv` or `--no-cnv`?
Yes, please! I have a similar notebook (GTX 980M)... the only versions that work are the AVX2 and the CLBlast builds! So now I am stuck on the latest version compiled with...
With MSYS2 I just did:

```
git checkout 0cd6bd3483fa66124b76a8a8ac794d9ee18c70c1
pacman -Su mingw-w64-clang-x86_64-clblast
pacman -Su mingw-w64-clang-x86_64-opencl-clhpp
cmake -B build -DLLAMA_NATIVE=ON -DLLAMA_BUILD_SERVER=ON -DLLAMA_CLBLAST=ON -DBUILD_SHARED_LIBS=OFF
cmake --build build -j $(nproc)
```

And the...
> GTX900 should have both CUDA and Vulkan support, both of which should be faster and better supported than OpenCL. In any case, unless someone volunteers to maintain the OpenCL...
Even an old laptop/old GPU could have some value with this enabled. CUDA libraries are HUGE, and in the end I did not notice any improvement in speed... with...
> I think the main feature of llama.cpp is its efficiency. clblast just added to it.

I tried other solutions like ollama, but **none** worked well. llama.cpp is still the...