Rene Leonhardt
> That's the best idea, because some models are exclusively for code completion (very fast for one-line answers) while others are only for chatting / instruction following (`-it` suffix): https://huggingface.co/google/codegemma-7b-it#description As you...
Thank you for reporting, glad to see a Linux user! Can you check your current setting and increase it? https://stackoverflow.com/questions/32281277/too-many-open-files-failed-to-initialize-inotify-the-user-limit-on-the-total#answer-38486048 If you don't need the FileWatcher you can disable it...
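For reference, on most Linux distributions the inotify limits behind that "too many open files" error can be inspected and raised like this (a sketch of the usual sysctl workflow; `524288` is a commonly suggested value, not a requirement):

```shell
# Check the current inotify limits (too-low values cause the FileWatcher error)
cat /proc/sys/fs/inotify/max_user_watches
cat /proc/sys/fs/inotify/max_user_instances

# Raise the watch limit for the current boot
sudo sysctl fs.inotify.max_user_watches=524288

# Persist the setting across reboots
echo "fs.inotify.max_user_watches=524288" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
```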
Do you have a screenshot, or is there an exception stacktrace in the log? I don't think they are related; you probably chose a model which is "too big" for your...
@niceapps-ch Thank you for reporting, can you reproduce this problem in 2.5.1? When I click on delete next to any response, that response and my corresponding prompt are removed, all...
Two URLs are available:
* http://localhost:11434/api/chat (GUI Client uses this)
* http://localhost:11434/v1/chat/completions (CodeGPT uses this)

If you change CodeGPT to `/api/chat`, you will see a blue Test Connection but an...
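A quick way to compare the two endpoints from a terminal (assuming Ollama is running locally and a model such as `llama3` has been pulled; the model name is an assumption):

```shell
# Ollama's native chat endpoint (what the GUI Client talks to)
curl http://localhost:11434/api/chat -d '{
  "model": "llama3",
  "messages": [{"role": "user", "content": "Hello"}],
  "stream": false
}'

# OpenAI-compatible endpoint (what CodeGPT talks to)
curl http://localhost:11434/v1/chat/completions -d '{
  "model": "llama3",
  "messages": [{"role": "user", "content": "Hello"}]
}'
```

Both return the assistant's reply, but the response JSON differs: the native endpoint returns a `message` object at the top level, while the OpenAI-compatible one wraps it in `choices`.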
You read my mind, I've wanted this feature many times too 😅 Disabling auto-scrolling would be awesome. And for users paying for services / API keys, a stop button maybe...
@ChuangLee Thank you for reporting, I was able to fix this problem.
@carlrobertoh I can't really find handling of escaped quotes or corresponding tests in llm-client 😅 Maybe integration tests could catch that? When I insert `\\\"` into some requests and responses...
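As a minimal illustration of why escaped quotes are easy to get wrong when request bodies are assembled by string concatenation rather than a JSON library (the naive concatenation shown here is hypothetical, not llm-client's actual code):

```python
import json

# A prompt containing quote characters, as a user might type print("hello")
prompt = 'print("hello")'

# Naive string concatenation produces invalid JSON for this input
naive_body = '{"prompt": "' + prompt + '"}'

# json.dumps escapes the inner quotes correctly
safe_body = json.dumps({"prompt": prompt})

try:
    json.loads(naive_body)
    naive_ok = True
except json.JSONDecodeError:
    naive_ok = False

print(naive_ok)                                    # False: unescaped quotes break parsing
print(json.loads(safe_body)["prompt"] == prompt)   # True: round-trips cleanly
```

An integration test that round-trips prompts containing `"` and `\` through a real request/response cycle would catch this class of bug.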
@AlexanderLuck What service / model service are you using? `ollama run codellama` doesn't generate that code including escapes with that prompt and empty context.
Can you reproduce the bug and post the stacktrace?
* `Help / Show log in Finder` (on Mac)
* Open `idea.log`
* Search from the end of the file: `at...
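The steps above can also be done from a terminal; here is a sketch using `awk` to keep everything from the last `ERROR` line onward (the sample log content is simulated, since the real `idea.log` location varies by IDE and version):

```shell
# Simulate an idea.log excerpt with two errors and their stack frames
printf '%s\n' \
  'INFO ok' \
  'ERROR first failure' \
  '    at com.example.A.run(A.java:1)' \
  'INFO recovered' \
  'ERROR second failure' \
  '    at com.example.B.run(B.java:2)' > /tmp/idea_sample.log

# Reset the buffer at every ERROR line, so only the last error block survives
awk '/ERROR/{buf=""} {buf=buf $0 "\n"} END{printf "%s", buf}' /tmp/idea_sample.log
```

Running the same `awk` one-liner against the real `idea.log` prints the most recent error together with its `at ...` stacktrace lines.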