
Installer did not auto-install tqdm; also stuck after running "start-webui.sh"

G2G2G2G opened this issue 1 year ago · 2 comments

After running ./install.sh as described in the README, I went to run:

python download-model.py OpenAssistant/oasst-sft-1-pythia-12b

from: https://github.com/oobabooga/text-generation-webui/issues/253

which failed with an error about a missing tqdm module.

I had to run pip3 install tqdm, and then everything worked.
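(For anyone hitting the same thing, a minimal recovery sequence, assuming the environment that install.sh set up is active:)

pip3 install tqdm
python download-model.py OpenAssistant/oasst-sft-1-pythia-12b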

Which one do you want to load? 1-2

2

Loading oasst-sft-1-pythia-12b...
Warning: no GPU has been detected.
Falling back to CPU mode.

Loading checkpoint shards:   0%|                  | 0/3 [00:00<?, ?it/s]
./start-webui.sh: line 8: 2491922 Killed                  python server.py --auto-devices --cai-chat

How do I get past that error?
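(For reference: Killed with no Python traceback usually means the kernel's OOM killer terminated the process because the machine ran out of memory. Assuming a Linux box with access to the kernel log, you can confirm with:)

dmesg | grep -iE "out of memory|killed process"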

I then ran the commands from the script directly, and instead of having it prompt me for which model to load, I loaded it manually as in the other thread:

python3 server.py --model "oasst-sft-1-pythia-12b"

Working so far...

model = AutoModelForCausalLM.from_pretrained(Path(f"models/{shared.model_name}"), low_cpu_mem_usage=True, torch_dtype=torch.bfloat16 if shared.args.bf16 else torch.float16).cuda()

RuntimeError: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx

Guess not.
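(The unconditional .cuda() at the end of that line is what throws when no NVIDIA driver is present. A minimal sketch of the kind of guard that avoids the crash; this is an illustration, not the webui's actual fallback code, and the model path is an example:)

import torch
from transformers import AutoModelForCausalLM

# Pick a device instead of calling .cuda() unconditionally
device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32  # fp16 is slow or unsupported on many CPUs
model = AutoModelForCausalLM.from_pretrained(
    "models/oasst-sft-1-pythia-12b", low_cpu_mem_usage=True, torch_dtype=dtype
).to(device)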

I ran it with --cpu, and it is now using more than 32 GB of RAM, so I guess I'll buy some more RAM and come back in a day.

G2G2G2G avatar Mar 12 '23 09:03 G2G2G2G

Part 1 is fixed: https://github.com/oobabooga/text-generation-webui/commit/3c25557ef0e1c727dfdadf5a5f9a53c533a82fe0

Part 2 means that your GPU was not recognized. Is it an NVIDIA GPU? Do you have the latest drivers installed?

oobabooga avatar Mar 12 '23 11:03 oobabooga

No, it said

Falling back to CPU mode.

so I thought that's what it did, but apparently not. Running with --cpu worked, and I need more RAM; this uses far more RAM than https://github.com/ggerganov/llama.cpp does, which is my only other point of reference. I figured this smaller model would use something closer to the ~4 GB of RAM (not VRAM) that the 7B LLaMA uses there.
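(A back-of-the-envelope check, weights only and ignoring activations and overhead, shows why the footprints differ so much: llama.cpp at this time ran 4-bit quantized weights, while loading through transformers keeps full-precision tensors in RAM:)

# Approximate weight memory, in GiB
GiB = 2**30
print(7e9 * 0.5 / GiB)   # 7B at 4-bit (llama.cpp): ~3.3 GiB
print(12e9 * 2 / GiB)    # 12B in float16:          ~22 GiB
print(12e9 * 4 / GiB)    # 12B in float32 (CPU):    ~45 GiB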

I did buy new RAM (as mentioned above); it should be here in a few days, and I think it'll work then, so I guess close this lmao

I've been reading for the last few hours to learn more about it.

G2G2G2G avatar Mar 12 '23 13:03 G2G2G2G

This issue has been closed due to inactivity for 30 days. If you believe it is still relevant, please leave a comment below.

github-actions[bot] avatar Apr 11 '23 23:04 github-actions[bot]