RedPinkRetro
I have 16 GB of VRAM and ran into the same error with the settings turned all the way down to a 512 context length and only one layer on the GPU, with no difference.
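In case the exact settings matter, this is roughly what I tried (a minimal sketch assuming Ollama's documented Modelfile parameters; the base model name is just a placeholder):

```sh
# Sketch only — "llama3.1" is a placeholder, substitute the model that fails for you
cat > Modelfile <<'EOF'
FROM llama3.1
PARAMETER num_ctx 512
PARAMETER num_gpu 1
EOF
ollama create low-vram-test -f Modelfile   # build a variant with the reduced settings
ollama run low-vram-test                   # same error either way in my case
```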
I already deleted the model and got a different one, but I believe it is related to this llama.cpp bug, https://github.com/ollama/ollama/issues/6048, which should be resolved by now.
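For anyone retracing the delete-and-re-pull step, it was nothing fancier than this (the model name is a placeholder for whichever one was failing):

```sh
ollama rm some-broken-model     # placeholder name — remove the possibly broken local copy
ollama pull some-broken-model   # re-pull; fetches the latest upload of that tag
```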
@nichjamesr sorry, I forgot to add that I got a nice [abliterated model](https://ollama.com/AutumnAurelium/llama3.1-abliterated) through the Ollama page to use straight from cmd. Works nicely, but unfortunately it's no solution for this...
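If anyone wants to grab the same one, this should do it, assuming the usual pull/run syntax for community models on ollama.com:

```sh
ollama pull AutumnAurelium/llama3.1-abliterated   # fetch the model from the linked page
ollama run AutumnAurelium/llama3.1-abliterated    # run would also pull it automatically
```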
Maybe the models built with the buggy llama.cpp version need to be patched themselves as well to be compatible again? Did you try looking for some very new ones, just...
Having the same issue with the bottom-right transparency. Copy-dragging a node somewhere further down toward the bottom right to work around that issue is a bit inconvenient. In case it helps...