Pawe98
Look at this issue: https://github.com/wackerl91/luna/issues/171 — but I don't understand the logs you provided.
Other differences found, but maybe they are intentional (it would be great if I could change them): `llama_new_context_with_model: n_ctx = 2048` vs `llama_new_context_with_model: n_ctx = 4096`, and some cache and buffer...
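If the n_ctx mismatch turns out to matter, one way to pin the context size on the Ollama side (a sketch assuming a standard Ollama setup, not anything luna-specific; `codellama-4k` is a made-up name) is a custom Modelfile:

```
# Hypothetical Modelfile: pin the context window for codellama:7b.
# num_ctx is the standard Ollama parameter for context length.
FROM codellama:7b
PARAMETER num_ctx 4096
```

Then build and use it with something like `ollama create codellama-4k -f Modelfile`, and point the client config at `codellama-4k` instead of `codellama:7b`.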
Multiple models have been tested; this is the current config.json I use, which reproduces the same behaviour as before: ``` { "models": [ { "model": "codellama:7b", "title": "Ollama", "provider":...
I've merged these changes locally into main and built the app. I've installed it on my Nvidia Shield. I've also added a toast message in the toggleKeyBoard method. However, the...