Increase input limit for the model
Is there any plan to increase the input token limit for the model? Right now, I guess there is a limit of 2048 tokens.
Are you guys planning to increase that? Or any idea how we can do it on our end?
That's a good suggestion, but it is hard to increase the limit because of the significant increase in memory and compute it would require. We'll try to investigate, though.
Yeah, that makes sense. But if someone is willing to contribute to the open-source effort, what needs to be done?
I did some very superficial digging. Increasing the input token limit seems to require retraining the complete model. But since Vicuna is based on LLaMA, would just fine-tuning Vicuna on longer sequences work, or does that require retraining the LLaMA base model too?
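From what I understand, the limit is baked in because the positional encoding only ever sees positions below 2048 during training. Here's a minimal sketch, assuming LLaMA-style rotary embeddings (RoPE); the `dim` and `base` values below are illustrative defaults, not necessarily what the real model uses:

```python
import numpy as np

def rope_angles(position, dim=8, base=10000.0):
    # Rotary-embedding rotation angles for a single token position.
    # Each pair of hidden dimensions rotates at its own frequency.
    inv_freq = 1.0 / (base ** (np.arange(0, dim, 2) / dim))
    return position * inv_freq

# Angles for the last position the model could have seen in training.
seen = rope_angles(2047)

# Position 4096 produces rotation angles the model never observed,
# which is why simply feeding longer inputs tends to degrade quality
# unless the model is fine-tuned on longer sequences.
unseen = rope_angles(4096)
```

So extending the context would presumably mean continued training (of Vicuna, or LLaMA itself) on sequences longer than 2048 so the model learns to handle those unseen positions.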
@RaiAmanRai Long-context LLMs are a hard research problem. We'll investigate this.
But we're unable to provide a significantly longer context window in this repo in the short term. Closing this issue.