Comments of Propheticus (310 results)

Of course, they are still copies of other comments, so you can find them by looking for duplicates (I could find 3 of the first linked example, 2 copies 1...

I was using the app with fully default settings (no config ini file), apart from my own API credentials of course. Ah, so we're now tweaking the autosmart to be a bit...

It's an optional filter in the config at line 158. According to the comment that accompanies it, it exists because some don't see this as spam... perhaps the owners of those accounts?...

causes https://github.com/longy2k/obsidian-bmo-chatbot/issues/66 and https://github.com/longy2k/obsidian-bmo-chatbot/issues/67

https://github.com/ggerganov/llama.cpp/issues/3664 might be related? (That would mean the issue is in nitro.exe, which uses llama.cpp.) Also: https://github.com/ggerganov/llama.cpp/issues/367#issuecomment-1479348872

Reading the two issues above plus https://github.com/ggerganov/llama.cpp/pull/4081, the leading space appears to be added during tokenization on purpose and is even needed for some models to work correctly. I'm still...

Without going further down the rabbit hole of how tokenization works internally and whether it applies to completions... The [OpenAI API spec](https://platform.openai.com/docs/guides/text-generation/chat-completions-response-format) (and the [Mistral AI API](https://platform.openai.com/docs/guides/text-generation/chat-completions-response-format) as well) gives examples...
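If a client does want to work around the space on its side, a minimal sketch could look like the following. This is not code from any of the linked issues or projects; the helper name is hypothetical, and it assumes the symptom is exactly one extra leading space on the completion text:

```python
def strip_tokenizer_leading_space(completion: str) -> str:
    """SentencePiece-style tokenizers (used by llama.cpp for many models)
    prepend a dummy space during tokenization; if the server echoes it
    back, the completion text starts with an unwanted ' '.
    Strip at most ONE leading space so intentional whitespace
    (e.g. indentation in code completions) survives."""
    if completion.startswith(" "):
        return completion[1:]
    return completion

print(strip_tokenizer_leading_space(" Hello"))  # -> Hello
print(strip_tokenizer_leading_space("Hello"))   # -> Hello (unchanged)
```

Deliberately removing only a single space keeps the workaround from mangling completions that legitimately begin with whitespace.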

Looks good to me @Van-QA 👍 tested on Jan v0.4.11-386 nightly.

How about a 'sliding window' that only uses the last X messages that fit in the context length? The number of evaluated (prompt) and generated tokens is reported after every...
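A rough sketch of that sliding window, assuming a `count_tokens` callback (hypothetical; in practice it could be calibrated against the prompt/generated token counts the runtime reports):

```python
def sliding_window(messages, context_length, reserved_for_reply, count_tokens):
    """Keep only the most recent messages whose combined token count
    fits in the context window, leaving room for the model's reply.
    `messages` is a chronological list of {"role": ..., "content": ...}
    dicts; `count_tokens` is a caller-supplied estimator."""
    budget = context_length - reserved_for_reply
    window, total = [], 0
    for msg in reversed(messages):  # walk from newest to oldest
        cost = count_tokens(msg["content"])
        if total + cost > budget:
            break  # this (and everything older) no longer fits
        window.append(msg)
        total += cost
    return list(reversed(window))  # restore chronological order

# Toy usage with a naive whitespace token estimator:
msgs = [
    {"role": "user", "content": "a b c"},
    {"role": "user", "content": "d e"},
]
kept = sliding_window(msgs, context_length=4, reserved_for_reply=1,
                      count_tokens=lambda s: len(s.split()))
# only the newest message fits the 3-token budget
```

A real implementation would also want to always keep the system prompt and count the per-message formatting overhead, which this sketch ignores.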

inspiration from the competition: ![image](https://github.com/janhq/jan/assets/6628064/c880dfbd-89be-4eba-bbb9-91cda33a817a)