Propheticus
### Filter Mode

Auto-Smart Mode

### Select the Problem

A type of spammer is not detected at all

### (Optional) If 'Other', Enter Very Short Description

_No response_

### Spammer...
**Describe the bug** Since installing v0.4.9-343, I can no longer attach documents in the chat unless it is the first message in a thread. Also there can be no assistant...
**Describe the bug** After attaching even a small 8-page document, when asking a follow-up question about a topic that is discussed in the middle of the document, no correct answers are given...
Jan's API server responds with a leading space. This leads to broken output (markdown tables don't render correctly) and illegal file names when the output is used to generate note...
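For anyone consuming the API downstream, a minimal client-side workaround is to strip the leading whitespace before the text is rendered or turned into a file name. The sketch below assumes an OpenAI-compatible endpoint on `localhost:1337` and a placeholder model id; both are assumptions, not part of the original report.

```python
import requests

# Workaround sketch (assumed endpoint and model id): strip the leading
# whitespace before the text is rendered as markdown or used as a file name.
resp = requests.post(
    "http://localhost:1337/v1/chat/completions",
    json={
        "model": "mistral-ins-7b-q4",  # assumed model id
        "messages": [{"role": "user", "content": "Suggest a note title"}],
    },
    timeout=60,
)
text = resp.json()["choices"][0]["message"]["content"]
clean = text.lstrip()  # drop the leading space the server prepends
filename = clean.splitlines()[0].strip() + ".md"
print(repr(text), "->", repr(filename))
```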
When calling the /chat/completions API endpoint without `"stream": true` set, the response is indeed a single JSON object of type "chat.completion" and not a stream of multiple server-sent event lines...
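To make the distinction concrete, here is a small sketch of both request shapes against an OpenAI-compatible endpoint; the server address and model id are assumptions. Without `stream`, the body parses as one `chat.completion` object; with `"stream": true`, the body is a sequence of `data:` lines carrying `chat.completion.chunk` objects.

```python
import json
import requests

BASE = "http://localhost:1337/v1/chat/completions"  # assumed server address
payload = {
    "model": "mistral-ins-7b-q4",                    # assumed model id
    "messages": [{"role": "user", "content": "Hi"}],
}

# Without "stream": true, the body is one JSON object of type "chat.completion".
resp = requests.post(BASE, json=payload, timeout=60)
data = resp.json()
assert data["object"] == "chat.completion"
print(data["choices"][0]["message"]["content"])

# With "stream": true, the body is a series of server-sent event lines
# ("data: {...}"), each carrying a "chat.completion.chunk" delta.
with requests.post(BASE, json={**payload, "stream": True}, stream=True, timeout=60) as r:
    for line in r.iter_lines():
        if not line or not line.startswith(b"data: "):
            continue
        chunk = line[len(b"data: "):]
        if chunk == b"[DONE]":
            break
        delta = json.loads(chunk)["choices"][0]["delta"].get("content", "")
        print(delta, end="")
```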
While chatting on this nightly version, with the context size set to 20k tokens, conversations start to break as early as the 3rd reply. When this happens, even new threads are...
Regenerating answers leads to strange output. Using Mistral 7B Instruct v0.2 Q5_K_M with the chat prompt `[INST] {prompt} [/INST]`, running on Vulkan acceleration.
To properly run Llama 3 models, you need to set the stop token `<|eot_id|>`. This is currently not configurable when running Jan in API server mode. The model is automatically loaded...
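As a per-request workaround until this is configurable server-side, an OpenAI-style `stop` parameter can usually be passed in the request body. The sketch below assumes Jan's default local endpoint and a placeholder model id; neither is stated in the original report.

```python
import requests

# Workaround sketch: pass the Llama 3 end-of-turn token as a per-request
# "stop" value, approximating the missing server-side setting.
resp = requests.post(
    "http://localhost:1337/v1/chat/completions",  # assumed endpoint
    json={
        "model": "llama3-8b-instruct",            # assumed model id
        "messages": [{"role": "user", "content": "Hello"}],
        "stop": ["<|eot_id|>"],                   # Llama 3 end-of-turn token
    },
    timeout=60,
)
print(resp.json()["choices"][0]["message"]["content"])
```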
Like the title says, the default value is set for every new thread. After picking a model whose model.json defines a ctx_len of e.g. 20000, the context is still...
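For reference, a quick way to check what the model actually declares is to read its model.json directly; the path and the nesting of `ctx_len` under `settings` in the sketch below are assumptions based on the issue text.

```python
import json
from pathlib import Path

# Assumed location of a model definition in Jan's data folder.
model_json = Path.home() / "jan" / "models" / "mistral-ins-7b-q4" / "model.json"

with model_json.open() as f:
    spec = json.load(f)

# Field name taken from the issue; the "settings" nesting is an assumption.
print("ctx_len declared by the model:", spec.get("settings", {}).get("ctx_len"))
```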
I've made a quick .bat file to automatically start the LMStudio server, load the model, and then start AnythingLLM. This works, with one caveat: I can't start the embedding model...
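The working part of that startup sequence can be sketched in Python for readers who prefer it over a .bat file; the `lms` CLI subcommands, the model key, and the AnythingLLM install path below are assumptions and may need adjusting for your setup. It does not cover the embedding-model caveat from the report.

```python
import subprocess
import time

# Start the LM Studio local server and load a chat model via the "lms" CLI
# (assumes "lms" is on PATH and the model key exists locally).
subprocess.run(["lms", "server", "start"], check=True)
subprocess.run(["lms", "load", "mistral-7b-instruct-v0.2"], check=True)  # assumed model key

time.sleep(5)  # give the server a moment before the client connects

# Launch AnythingLLM (assumed install path).
subprocess.Popen([r"C:\Users\me\AppData\Local\Programs\AnythingLLM\AnythingLLM.exe"])
```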