
Pass more tokens

Open Acidbuk opened this issue 1 year ago • 8 comments

Hi. If I can ask, what have you set as the default number of tokens? Also, is there a settings file I can tinker with to give it more tokens for context and replies? I don't mind if it makes things a bit slower (still faster than trying to run on my GPU), but sometimes, when you really get it going with just the right prompt, it writes gold. Then, when it cuts itself off mid-sentence in the middle of a story after running out of tokens, I can't make it remember the context of the previous reply and continue where it left off.

Acidbuk avatar Apr 11 '23 17:04 Acidbuk

The context size is already set to the maximum (2048).

ItsPi3141 avatar Apr 11 '23 18:04 ItsPi3141

Is 2048 a hard limit with llama.cpp, or is that a function of the model? I know GPT-3.5 is somewhere around 4,000 tokens, but it seems to keep a better memory for longer before it goes senile. I'm not sure how they achieve that; I suspect they might be feeding in a summarised version of the previous posts to keep the bot on track?

Acidbuk avatar Apr 11 '23 22:04 Acidbuk

Is 2048 a hard limit with llama.cpp, or is that a function of the model?

Yes, 2048 seems to be the hard limit. llama.cpp does let you push the context size past 2048, but it warns that performance may be negatively impacted.
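
For reference, the context size is just a flag passed to the llama.cpp binary when it gets spawned. Something along these lines (a simplified sketch, not the app's actual code; the binary path, model path, and prompt are placeholders):

```ts
// Sketch: spawning a llama.cpp-style `main` binary from Node with an
// explicit context size. `-c` sets the context window in tokens.
import { spawn } from "child_process";

const chat = spawn("./main", [
  "-m", "./ggml-model-q4_1.bin", // model file (placeholder path)
  "-c", "2048",                  // context size; the model's training limit
  "-n", "512",                   // max tokens to generate per reply
  "-p", "Hello!",                // prompt (placeholder)
]);

chat.stdout.on("data", (data: Buffer) => process.stdout.write(data.toString()));
```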

ItsPi3141 avatar Apr 12 '23 02:04 ItsPi3141

Are you planning on implementing context? By "context," I mean compressing previous messages and placing them in the prompt like GPT-3/4's createChatCompletion. I recently managed to run the Vicuna ggml-vicuna-13b-4bit-rev1.bin model by browsing to its file and loading it. Unfortunately, I encountered a bug in the prompt that generated infinite text output. For instance, when I asked the model to write a song in the style of Marshall Mathers about AI and humans coexisting, it printed 33 verses before I had to stop it. I suspect the text would have gone on indefinitely. Otherwise, my Lenovo Legion 5i Pro ran both ggml-model-q4_1.bin and the Vicuna model satisfactorily — nearly as fast as GPT-4.

The Chatbot-Ui really caught my attention, and I'm fascinated by the idea of combining it with frameworks like Next.js and Electron. How challenging would this be, and is it even possible?

kendevco avatar Apr 14 '23 01:04 kendevco

Are you planning on implementing context? By "context," I mean compressing previous messages and placing them in the prompt like GPT-3/4's createChatCompletion.

In theory, I could do that, but it would make performance very poor on most computers. OpenAI can do this because they have a bunch of beefy GPUs at their disposal, whereas this runs locally, sometimes on near-potato hardware.
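
To illustrate what I mean (just a sketch, nothing that's implemented): you'd keep the last few exchanges verbatim and fold older ones into a running summary that gets prepended to each prompt. The catch is that `summarize` here would itself be another model call, which is exactly the expensive part:

```ts
// Hypothetical "compressed context" prompt builder, not an existing feature.
interface Message { role: "user" | "assistant"; text: string; }

const RECENT_WINDOW = 4; // messages kept verbatim (arbitrary choice)

async function buildPrompt(
  history: Message[],
  userInput: string,
  summarize: (text: string) => Promise<string> // another (costly) model call
): Promise<string> {
  const older = history.slice(0, -RECENT_WINDOW);
  const recent = history.slice(-RECENT_WINDOW);

  // Compress everything older than the recent window into one summary line.
  const summary = older.length
    ? await summarize(older.map((m) => `${m.role}: ${m.text}`).join("\n"))
    : "";

  return [
    summary && `Summary of earlier conversation: ${summary}`,
    ...recent.map((m) => `${m.role}: ${m.text}`),
    `user: ${userInput}`,
    "assistant:",
  ].filter(Boolean).join("\n");
}
```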

The Chatbot-Ui really caught my attention, and I'm fascinated by the idea of combining it with frameworks like Next.js and Electron. How challenging would this be, and is it even possible?

I don't know how to use Next.js because I hate HTML frameworks (e.g. Bootstrap, Vue, Angular, React). It probably wouldn't be hard to turn it into an Electron app, though. If it runs in the web browser, you could just embed that very same page into an Electron app and that's it.
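
Something like this minimal main process would do it (a sketch; the URL is a placeholder for wherever the web UI is served):

```ts
// Wrap an existing web UI (e.g. a locally served chatbot-ui instance)
// in a desktop window.
import { app, BrowserWindow } from "electron";

app.whenReady().then(() => {
  const win = new BrowserWindow({ width: 1200, height: 800 });
  win.loadURL("http://localhost:3000"); // the page you'd otherwise open in a browser
});

app.on("window-all-closed", () => app.quit());
```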

ItsPi3141 avatar Apr 14 '23 01:04 ItsPi3141

I haven't tried this yet, but it might help increase the effective prompt size by compressing the prompt.

https://github.com/yasyf/compress-gpt

erkkimon avatar Apr 27 '23 19:04 erkkimon

I haven't tried this yet, but it might help increase the effective prompt size by compressing the prompt.

https://github.com/yasyf/compress-gpt

I'll take a look at how it works later. If it's not too complicated, I'll try to implement something similar.
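
Even something crude might help as a starting point, e.g. mechanical shrinking rather than compress-gpt's model-driven rewriting (a hypothetical sketch, not how compress-gpt works internally):

```ts
// Naive, lossy prompt shrinking: collapse whitespace and drop exact
// duplicate lines. Saves tokens without an extra model call.
function shrinkPrompt(prompt: string): string {
  const seen = new Set<string>();
  return prompt
    .split("\n")
    .map((line) => line.replace(/\s+/g, " ").trim()) // collapse runs of whitespace
    .filter((line) => {
      if (line === "") return false;    // drop blank lines
      if (seen.has(line)) return false; // drop exact duplicates
      seen.add(line);
      return true;
    })
    .join("\n");
}
```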

ItsPi3141 avatar Apr 27 '23 20:04 ItsPi3141

I'm not sure how it affects performance, but it might be good to be aware of the possibility. It's implemented as a drop-in replacement, which is quite cool, imho.

erkkimon avatar Apr 28 '23 12:04 erkkimon