David Campos
The same issue on stable.
@afergadis: You can try two things:
- use a lower number of threads: `./nejiWeb.sh -t 2`
- try allocating more memory using the `-Xmx` option, by editing the `./nejiWeb.sh` script
@afergadis Can you please post the complete command line? Which model are you using? Was it trained using the same version of Neji that you are using right now?
How about using it with Ollama deepseek r1 distilled models? Does it also offer a streaming option to not send the reasoning tokens?
Yes, using OpenAI-compatible APIs of Ollama. I have been using this awesome tool with Ollama for a long time and I can say I am a delighted customer 😍
> @davidcampos, could you try using one of those models with Ollama (OpenAI API) + Writing Tools? I just tried it with the proofread option, using Ollama with deepseek-r1 32b...
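On the question of not showing the reasoning tokens: the deepseek-r1 distilled models emit their chain-of-thought wrapped in `<think>...</think>` tags inside the ordinary message content, so one option is to strip those blocks client-side after the response arrives. A minimal sketch (the tag format is an assumption about the model's output; this is not a feature of Writing Tools or Ollama itself):

```python
import re

# deepseek-r1 distilled models interleave reasoning as <think>...</think>
# blocks in the message content (assumed tag format). DOTALL lets the
# pattern span multi-line reasoning sections.
THINK_RE = re.compile(r"<think>.*?</think>\s*", re.DOTALL)

def strip_reasoning(text: str) -> str:
    """Remove <think> blocks so only the final answer remains."""
    return THINK_RE.sub("", text).strip()

# Example with a made-up model reply:
reply = "<think>User wants a proofread of this sentence.</think>They're going to the park."
print(strip_reasoning(reply))
```

For true streaming you would buffer chunks until the closing `</think>` has passed and only then start forwarding text, but the post-hoc filter above is the simplest starting point.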
I have a similar issue. Any ideas?