Juqowel

9 comments of Juqowel

> ### What should have happened?
> No artifacts?

Nope. It's normal behavior.

> You think so? I think there are TOO MANY artifacts here.

That's how it works with this model.

> cd text-generation-webui
> call python server.py --chat --pre_layer 31 --wbits 4 --groupsize 128 --model gpt4-x-alpaca-13b-native-4bit-128g

The model name must match the folder name: `--model anon8231489123_gpt4-x-alpaca-13b-native-4bit-128g`
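The reason the exact folder name matters can be sketched like this: the webui resolves `--model` by looking for a directory with that exact name under its `models/` folder. This is an illustrative sketch of that lookup, not the actual loader code; `find_model_dir` and its signature are assumptions for the example.

```python
from pathlib import Path

def find_model_dir(models_root, model_name):
    """Sketch of how --model is resolved: the name must be an exact
    directory name under the models root, or loading fails."""
    candidate = Path(models_root) / model_name
    return candidate if candidate.is_dir() else None
```

So if the download landed in `models/anon8231489123_gpt4-x-alpaca-13b-native-4bit-128g`, passing only the short `gpt4-x-alpaca-13b-native-4bit-128g` finds nothing.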

4-bit models are loaded into RAM before being sent to VRAM. 16GB of RAM is the minimum for a 13b 4-bit model (10-11GB of free RAM).
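A back-of-envelope calculation shows why: the raw 4-bit weights of a 13B model alone are around 6 GiB, and they pass through system RAM during loading. This is a rough estimate only; real GPTQ checkpoints (e.g. with groupsize 128) also store per-group scales and zeros, so actual files are somewhat larger.

```python
def quantized_weight_gib(n_params, bits):
    """Approximate raw weight size in GiB for n_params parameters
    quantized to the given bit width (ignores scales/zeros overhead)."""
    return n_params * bits / 8 / 2**30

size = quantized_weight_gib(13e9, 4)  # roughly 6 GiB of raw weights
```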

> [Pythia][OPT][GALACTICA][GPT-J 6B]

These are just examples; they are not the only supported models. Vicuna also works.

> My first instinct was to add `llama` to the file name; that didn’t work.

File name or folder name? Try adding "alpaca" to the folder name.

> I just following all steps from "Installation" instructions

All steps? Have you tried just installing via the One-click installer?

> also, yes I download a mpt-7b-instruct model, but I have no idea how to install einops by local conda, because call to path python directs "pip install einops" to...

> there is no file with that name :|

Yes, I see. Now it's called `cmd_windows.bat`
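The usual cause of the einops problem above is that `pip install einops` runs against a different Python than the one the webui actually uses, which is why launching `cmd_windows.bat` first (to enter the installer's own environment) matters. A quick, illustrative way to confirm a package is visible to the interpreter you are currently running; the helper name is an assumption for the example:

```python
import importlib.util

def is_installed(package):
    """Return True if `package` is importable from the currently
    running Python interpreter (the one that will load the model)."""
    return importlib.util.find_spec(package) is not None
```

Running `python -c "import einops"` from inside the `cmd_windows.bat` shell is an equivalent manual check.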