
113 comments by LaaZa

So you just want different prompts? You can use the default or notebook (basically the whole context) to write, or select a pre-made prompt. To add new prompts, just make a...

You may be able to load it as a normal model, but I'm not sure. Set `--wbits` to none (remove the option, or set it to none in the webui) so it...

Try loading a GGML model to run on the CPU and system RAM instead. You have no hope of running anything on that GPU.

In llama-cpp-python [0 is random](https://github.com/abetlen/llama-cpp-python/blob/2f2ea00a3db0b1dd45af56bfa0bf9464b819982b/llama_cpp/llama.py#LL93C13-L93C45)
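For context, the linked check amounts to treating a seed of 0 as "pick a random seed". A minimal sketch of that pattern in plain Python (the `make_seed` helper name is mine, not from llama-cpp-python, which handles this internally):

```python
import random

def make_seed(seed: int) -> int:
    # A seed of 0 means "randomize": draw a fresh 32-bit seed.
    # Any other value is used as-is, giving reproducible sampling.
    if seed == 0:
        return random.randint(1, 2**32 - 1)
    return seed
```

So passing `seed=0` gives different output on every run, while a fixed non-zero seed reproduces the same generation.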

Might be your other params. Increase the temperature or something.
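To illustrate what temperature does: it rescales the logits before sampling, so higher values flatten the distribution (more varied, less repetitive output) and lower values sharpen it toward greedy picking. A minimal sketch, not actual webui code:

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    # Divide logits by the temperature: T > 1 flattens the
    # distribution (more random), T < 1 sharpens it (more greedy).
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]
```

With logits `[2.0, 1.0, 0.0]`, a temperature of 10 yields near-uniform probabilities, while 0.1 puts almost all mass on the top token.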

I don't think that laptop even has a dedicated GPU, and it's AMD. And either way, it would be very unlikely you could load such a large model on anything but the...

> This seems similar to https://github.com/vladmandic/automatic/issues/205, which is supposedly the same as https://github.com/oobabooga/text-generation-webui/issues/819, which is supposedly fixed by https://github.com/oobabooga/text-generation-webui/pull/1089. > > Do you get the same error if you simply...

You did not install GPU support. But I don't know what your GPU is, so it's hard to say whether it can be used at all. Either try installing again...

I think you should use a GGML model instead; you are going to run into issues with memory otherwise. [TehVenom/Pygmalion-7b-4bit-Q4_1-GGML](https://huggingface.co/TehVenom/Pygmalion-7b-4bit-Q4_1-GGML) Here is a GGML model of Pygmalion-7B, which is the...
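As a rough back-of-the-envelope for why a 4-bit GGML model helps with memory: weight storage is parameters times bits per weight divided by 8 (ignoring activation and context overhead), so a 7B model needs about 3.5 GB at 4-bit versus about 14 GB at fp16:

```python
def model_weight_gb(n_params: float, bits_per_weight: float) -> float:
    # bytes = parameters * bits / 8; convert to gigabytes (1e9 bytes).
    # Weights only -- KV cache and activations add more on top.
    return n_params * bits_per_weight / 8 / 1e9

print(model_weight_gb(7e9, 4))   # 7B at 4-bit: 3.5 GB
print(model_weight_gb(7e9, 16))  # 7B at fp16: 14.0 GB
```

That difference is what puts a 7B model within reach of ordinary system RAM on CPU.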

> okay, sorry but I just don't know what to do with the information you provided above. the webui.py won't stay open after couple second and I'm just going reinstall...