epic: Jan Context Length issues
Goal
- Jan needs an elegant way to deal with model context length issues
Possible Scope
- e.g. Logic for handling threads that exceed the model's context length?
- e.g. Users can adjust the context length within the model's bounds (see the sketch after this list)
- e.g. Support for longer context lengths when both the model and the hardware allow it
- e.g. Jan adapts the context length automatically, based on GGUF metadata or model.yaml plus hardware detection
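A minimal sketch of what the "adjust within bounds" / adaptive idea could look like, assuming hypothetical field names (nCtxTrain, freeVramBytes, bytesPerCtxToken) rather than Jan's actual APIs:

```ts
interface ModelMeta {
  nCtxTrain: number; // max context the model was trained with (from GGUF / model.yaml)
}

interface HardwareInfo {
  freeVramBytes: number;    // from hardware detection
  bytesPerCtxToken: number; // rough KV-cache cost per context token for this model
}

// Clamp the user's requested context length to model and hardware bounds.
function effectiveContextLength(requested: number, model: ModelMeta, hw: HardwareInfo): number {
  const hwMax = Math.floor(hw.freeVramBytes / hw.bytesPerCtxToken); // hardware bound
  return Math.min(requested, model.nCtxTrain, hwMax);
}
```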
Linked Issues
- [ ] https://github.com/janhq/jan/issues/2193
Cortex Issue
- [x] https://github.com/janhq/cortex.cpp/issues/1151
Original Post
Problem
In some cases, users can exceed the model's 4096-token limit (~4,000 words), but we haven't implemented any solution to handle it.
Success Criteria
- Show an alert that notifies users when they exceed the context length
- Delete the earliest user message (but not the system prompt) when the thread exceeds the context length (see the sketch below)
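A minimal sketch of that trimming rule, assuming a placeholder countTokens heuristic (a real implementation would use the model's tokenizer):

```ts
type Role = "system" | "user" | "assistant";
interface Message { role: Role; content: string; }

// Crude ~4 characters per token estimate; stands in for a real tokenizer.
const countTokens = (text: string): number => Math.ceil(text.length / 4);

// Drop the oldest non-system message until the thread fits the context window.
function trimToContext(messages: Message[], maxTokens: number): Message[] {
  const trimmed = [...messages];
  const total = () => trimmed.reduce((n, m) => n + countTokens(m.content), 0);
  while (total() > maxTokens) {
    const idx = trimmed.findIndex((m) => m.role !== "system");
    if (idx === -1) break; // only the system prompt remains; nothing left to drop
    trimmed.splice(idx, 1);
  }
  return trimmed;
}
```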
Additional context
Bug:
@imtuyethan
As discussed with @hahuyhoang411:
- Show an error when the thread exceeds the context length
- Recommend that users delete messages themselves or create a new thread
Design:
https://www.figma.com/file/ytn1nRZ17FUmJHTlhmZB9f/Jan-App-(version-1)?type=design&node-id=6847-111809&mode=design&t=ErX19MBkMjVhBSjO-4
(This is the MVP for now; in the future we will have a standardized error format that directs users to the Discourse forum, where they can see the answer. See specs: https://www.notion.so/jan-ai/Standardized-Error-Format-for-Jan-abea56d32d6648bb8c6835f9176f800c?pvs=4)
Will this issue be improved? 4,000 tokens is too little for conversations.
How about a 'sliding window' that only uses the last X messages that fit in the context length? The number of evaluated (prompt) and generated tokens is reported after every call, so the data is there. If the last inference's evaluated + generated token count comes close to the max context, you need to start excluding the first turn.
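A sketch of that sliding-window check, assuming OpenAI-style usage fields (prompt_tokens, completion_tokens), which OpenAI-compatible servers typically report; the safety margin is an arbitrary illustration:

```ts
interface Usage { prompt_tokens: number; completion_tokens: number; }
interface Turn { user: string; assistant: string; }

const SAFETY_MARGIN = 256; // headroom for the next user message and reply

// Decide after each call whether the oldest turn must go before the next one.
function slideWindow(turns: Turn[], lastUsage: Usage, maxCtx: number): Turn[] {
  const used = lastUsage.prompt_tokens + lastUsage.completion_tokens;
  if (used + SAFETY_MARGIN < maxCtx) return turns; // still fits, keep everything
  return turns.slice(1); // close to the limit: exclude the oldest turn
}
```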
I don't know if there are best practices here, but I'd suggest not excluding the very first message, since I believe most users set the stage with it. When excluding messages, some sort of placeholder could be put between the first message and the next remaining one, like: 'There have been messages in between these that were removed due to a moving context-length window. Pretend this bit makes sense, but disregard it as context going forward.'
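A short sketch of that variant (keep the first message, drop the middle, insert a placeholder); the placeholder wording and the dropCount parameter are purely illustrative:

```ts
interface Msg { role: "system" | "user" | "assistant"; content: string; }

// Keep the stage-setting first message, remove `dropCount` messages after it,
// and leave a placeholder marking the gap.
function dropMiddle(messages: Msg[], dropCount: number): Msg[] {
  if (messages.length <= 2 || dropCount <= 0) return messages;
  const placeholder: Msg = {
    role: "system",
    content:
      "Some earlier messages were removed to fit the context window; " +
      "treat the conversation as continuous.",
  };
  return [messages[0], placeholder, ...messages.slice(1 + dropCount)];
}
```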
inspiration from the competition: