Propheticus

Results: 18 issues by Propheticus

### Filter Mode
Auto-Smart Mode
### Select the Problem
A type of spammer is not detected at all
### (Optional) If 'Other', Enter Very Short Description
_No response_
### Spammer...

Filtering Suggestion

**Describe the bug** Since installing v0.4.9-343 I can no longer attach documents in the chat if it's not the first message in a thread. Also there can be no assistant...

P0: critical
type: bug

**Describe the bug** After attaching even a small 8-page document, asking a follow-up question about a topic discussed in the middle of it yields no correct answers...

type: bug

Jan's API server responds with a leading space. This leads to broken output (markdown tables don't render correctly) and illegal file names when the output is used to generate note...

P2: nice to have
type: bug
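Until the leading space is fixed server-side, a client could sanitize completions before rendering or naming files. A minimal sketch, assuming nothing about Jan's internals; `clean_completion` and `safe_filename` are hypothetical helper names, and the file-name rules are illustrative:

```python
def clean_completion(text: str) -> str:
    """Strip the stray leading whitespace reported above before the
    text is rendered as markdown or used to build a file name."""
    return text.lstrip()

def safe_filename(text: str) -> str:
    """Reduce a completion to a legal file name (illustrative rules,
    not any tool's actual logic)."""
    cleaned = clean_completion(text)
    # Keep only characters that are broadly safe in file names.
    kept = "".join(c for c in cleaned if c.isalnum() or c in " -_.")
    return kept.strip() or "untitled"
```

With the leading space removed, markdown tables starting with `|` render again, since the parser no longer sees an indented line.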

When calling the /chat/completions API endpoint without `"stream": true` set, the response is indeed a single JSON object of type "chat.completion" and not a stream of multiple server-sent event lines...

P1: important
type: bug

While chatting on this nightly version, with a context size set to 20k tokens, conversations already start to break at the 3rd reply. When this happens, even new threads are...

type: bug

Regenerating answers leads to strange output ![image](https://github.com/janhq/jan/assets/6628064/dccf78a1-e507-4a24-b350-8f6294750172) ![image](https://github.com/janhq/jan/assets/6628064/73268be5-fa5e-4bd6-94ac-8ed4667dc590) Using Mistral 7B Instruct v0.2 Q5_K_M with chat prompt `[INST] {prompt} [/INST]` running on Vulkan acceleration.

type: bug

To properly run Llama 3 models, you need to set the stop token ``. This is currently not configurable when running Jan in API server mode. The model is automatically loaded...

type: bug
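As a possible per-request workaround, OpenAI-compatible chat endpoints generally accept a `stop` parameter in the request payload. A minimal sketch; the actual Llama 3 stop token is elided in the report above, so `STOP_TOKEN` below is a placeholder, and `build_request` is a hypothetical helper:

```python
# Placeholder: the report elides the literal stop token value.
STOP_TOKEN = "<placeholder-stop-token>"

def build_request(prompt: str) -> dict:
    """Sketch of an OpenAI-compatible /chat/completions payload that
    passes a stop sequence per request, for when server-side model
    settings cannot be configured."""
    return {
        "messages": [{"role": "user", "content": prompt}],
        "stop": [STOP_TOKEN],
    }
```

Whether Jan's API server honors a per-request `stop` list would need to be verified; the sketch only shows the payload shape.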

Like the title says, the default value is set for every new thread. After picking a model whose model.json defines a ctx_len of e.g. 20000, the context is still...

type: bug
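The expected behavior described above amounts to reading `ctx_len` from the model's model.json rather than applying a global default. A minimal sketch, assuming `ctx_len` sits under a `settings` object (an assumption; the report does not show the file layout) and using an illustrative fallback of 2048:

```python
import json

def ctx_len_from_model_json(text: str, default: int = 2048) -> int:
    """Return ctx_len from a model.json document, falling back to a
    default only when the model defines none. The nesting under
    "settings" and the 2048 fallback are assumptions for illustration."""
    settings = json.loads(text).get("settings", {})
    return int(settings.get("ctx_len", default))
```

Per the report, a model declaring `"ctx_len": 20000` should yield 20000 for its threads, not the default.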

I've made a quick bat file to automatically start the LMStudio server, load the model, and then start AnythingLLM. This works, with one caveat: I can't start the embedding model...

enhancement