
Allow the maximum requested response size (tokens) to be specified in the command

Open · w0rp opened this issue 2 years ago · 1 comment

We should permit the maximum number of response tokens to be set at will when text is requested, in addition to being configurable globally for all prompts, so you can request smaller or larger responses in different contexts.
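
A minimal sketch of how a per-request override could resolve against a global setting. The option name, default value, and function here are illustrative assumptions, not the plugin's actual API:

```python
# Hypothetical per-request max_tokens resolution; names are illustrative
# and do not reflect neural's real configuration keys.
from typing import Optional

DEFAULT_MAX_TOKENS = 1024  # assumed global setting applied to all prompts


def resolve_max_tokens(request_override: Optional[int] = None) -> int:
    """Use a per-request token limit if given, else fall back to the global one."""
    if request_override is not None:
        if request_override <= 0:
            raise ValueError("max tokens must be a positive integer")
        return request_override

    return DEFAULT_MAX_TOKENS


# A command-level override (e.g. some flag on the :Neural command) would
# simply pass its value through:
print(resolve_max_tokens(256))  # -> 256
print(resolve_max_tokens())     # -> 1024
```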

w0rp · Feb 15 '23 16:02

I think with #41 implemented we should be able to dynamically adjust the request for a model source such that it never requests more tokens than the maximum allowable by the model.
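
As a rough sketch of that dynamic adjustment, the response budget could be clamped to whatever context remains after the prompt. The model limits and tokenizer stand-in below are assumptions for illustration, not the plugin's real code:

```python
# Illustrative clamping of the requested response tokens to the model's
# remaining context; limits and token counting are assumed, not real values.
MODEL_CONTEXT_LIMITS = {
    "gpt-3.5-turbo": 4096,  # assumed context window for this example
}


def count_tokens(prompt: str) -> int:
    # Stand-in for a real tokenizer (e.g. tiktoken); crude whitespace count.
    return len(prompt.split())


def clamp_max_tokens(model: str, prompt: str, requested: int) -> int:
    """Never request more response tokens than the model's context allows."""
    available = MODEL_CONTEXT_LIMITS[model] - count_tokens(prompt)
    return max(0, min(requested, available))


# A 4-word prompt against a 4096-token context leaves 4092 tokens,
# so an oversized request is silently reduced:
print(clamp_max_tokens("gpt-3.5-turbo", "Write a short poem", 8000))  # -> 4092
```

With this in place, the user only needs to think about token length when the prompt itself approaches the model's limit, which matches the UX goal described next.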

The design decision I want to go with is that, from a UX perspective, the user shouldn't need to worry about token length unless they are going over the limit.

I would personally rather give a model the freedom to respond with as many tokens as it can instead of artificially limiting its response. The downside to this is monetary cost for API sources, or computational cost for local sources (Coming Soon™).

Side note: in the future, token limits might not be something to worry about thanks to sliding-window attention, but that's a different thing to contend with.


With all that said, this will need to be implemented anyway so that the max token request can be adjusted dynamically.

Angelchev · Dec 29 '23 16:12