
OS error

lakshyaaaaaaa opened this issue

Describe the bug

Gradio HTTP request redirected to localhost :)
bin C:\Users\LAKSHYA\Downloads\oobabooga_windows\oobabooga_windows\installer_files\env\lib\site-packages\bitsandbytes\libbitsandbytes_cpu.dll
C:\Users\LAKSHYA\Downloads\oobabooga_windows\oobabooga_windows\installer_files\env\lib\site-packages\bitsandbytes\cextension.py:33: UserWarning: The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers, 8-bit multiplication, and GPU quantization are unavailable.
  warn("The installed version of bitsandbytes was compiled without GPU support. "
Loading mayaeary_pygmalion-6b_dev-4bit-128g...
Warning: torch.cuda.is_available() returned False.
This means that no GPU has been detected.
Falling back to CPU mode.

Traceback (most recent call last):
  File "C:\Users\LAKSHYA\Downloads\oobabooga_windows\oobabooga_windows\text-generation-webui\server.py", line 918, in <module>
    shared.model, shared.tokenizer = load_model(shared.model_name)
  File "C:\Users\LAKSHYA\Downloads\oobabooga_windows\oobabooga_windows\text-generation-webui\modules\models.py", line 209, in load_model
    model = LoaderClass.from_pretrained(checkpoint, **params)
  File "C:\Users\LAKSHYA\Downloads\oobabooga_windows\oobabooga_windows\installer_files\env\lib\site-packages\transformers\models\auto\auto_factory.py", line 471, in from_pretrained
    return model_class.from_pretrained(
  File "C:\Users\LAKSHYA\Downloads\oobabooga_windows\oobabooga_windows\installer_files\env\lib\site-packages\transformers\modeling_utils.py", line 2405, in from_pretrained
    raise EnvironmentError(
OSError: Error no file named pytorch_model.bin, tf_model.h5, model.ckpt.index or flax_model.msgpack found in directory models\mayaeary_pygmalion-6b_dev-4bit-128g.

Is there an existing issue for this?

  • [X] I have searched the existing issues

Reproduction

Same output as above.

Screenshot

No response

Logs

Gradio HTTP request redirected to localhost :)
bin C:\Users\LAKSHYA\Downloads\oobabooga_windows\oobabooga_windows\installer_files\env\lib\site-packages\bitsandbytes\libbitsandbytes_cpu.dll
C:\Users\LAKSHYA\Downloads\oobabooga_windows\oobabooga_windows\installer_files\env\lib\site-packages\bitsandbytes\cextension.py:33: UserWarning: The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers, 8-bit multiplication, and GPU quantization are unavailable.
  warn("The installed version of bitsandbytes was compiled without GPU support. "
Loading mayaeary_pygmalion-6b_dev-4bit-128g...
Warning: torch.cuda.is_available() returned False.
This means that no GPU has been detected.
Falling back to CPU mode.

Traceback (most recent call last):
  File "C:\Users\LAKSHYA\Downloads\oobabooga_windows\oobabooga_windows\text-generation-webui\server.py", line 918, in <module>
    shared.model, shared.tokenizer = load_model(shared.model_name)
  File "C:\Users\LAKSHYA\Downloads\oobabooga_windows\oobabooga_windows\text-generation-webui\modules\models.py", line 209, in load_model
    model = LoaderClass.from_pretrained(checkpoint, **params)
  File "C:\Users\LAKSHYA\Downloads\oobabooga_windows\oobabooga_windows\installer_files\env\lib\site-packages\transformers\models\auto\auto_factory.py", line 471, in from_pretrained
    return model_class.from_pretrained(
  File "C:\Users\LAKSHYA\Downloads\oobabooga_windows\oobabooga_windows\installer_files\env\lib\site-packages\transformers\modeling_utils.py", line 2405, in from_pretrained
    raise EnvironmentError(
OSError: Error no file named pytorch_model.bin, tf_model.h5, model.ckpt.index or flax_model.msgpack found in directory models\mayaeary_pygmalion-6b_dev-4bit-128g.
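For context (an editor's note, not from the thread): the OSError is raised by a filename check inside transformers' `from_pretrained`, which only recognizes the standard weight filenames listed in the message. A GPTQ-quantized checkpoint such as `mayaeary_pygmalion-6b_dev-4bit-128g` typically ships a single `.safetensors` or `.pt` file instead, so the check fails even though the download succeeded. A minimal sketch of that check (the `classify_checkpoint` helper is hypothetical, not part of transformers):

```python
# Hypothetical diagnostic that mirrors the filename check transformers
# performs before raising the OSError shown in the log above.
from pathlib import Path

# The standard weight filenames named in the error message.
STANDARD_WEIGHTS = ("pytorch_model.bin", "tf_model.h5",
                    "model.ckpt.index", "flax_model.msgpack")

def classify_checkpoint(model_dir: str) -> str:
    """Return 'standard', 'quantized', or 'missing' for a model directory."""
    names = {p.name for p in Path(model_dir).iterdir() if p.is_file()}
    if any(w in names for w in STANDARD_WEIGHTS):
        return "standard"   # the plain transformers loader can handle this
    if any(n.endswith((".safetensors", ".pt")) for n in names):
        return "quantized"  # likely a GPTQ checkpoint; needs the GPTQ loader
    return "missing"        # nothing loadable was downloaded
```

If this reports "quantized", the model was likely meant to be started through the web UI's GPTQ path (at the time, the `--wbits 4 --groupsize 128` flags) rather than the plain transformers loader; "missing" usually means the download was incomplete.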

System Info

Same output as in the Logs section above.

lakshyaaaaaaa · Apr 24, 2023

Might be a model issue? I'm getting the same error with pygmalion-6b-gptq-4bit

jfranecki · Apr 29, 2023

I get the same error with facebook_galactica-6.7b model...

Gradio HTTP request redirected to localhost :)
bin C:\Users\zach1\oobabooga_windows\installer_files\env\lib\site-packages\bitsandbytes\libbitsandbytes_cuda117.dll
Loading facebook_galactica-6.7b...
Traceback (most recent call last):
  File "C:\Users\zach1\oobabooga_windows\text-generation-webui\server.py", line 914, in <module>
    shared.model, shared.tokenizer = load_model(shared.model_name)
  File "C:\Users\zach1\oobabooga_windows\text-generation-webui\modules\models.py", line 84, in load_model
    model = LoaderClass.from_pretrained(Path(f"{shared.args.model_dir}/{model_name}"), low_cpu_mem_usage=True, torch_dtype=torch.bfloat16 if shared.args.bf16 else torch.float16, trust_remote_code=trust_remote_code)
  File "C:\Users\zach1\oobabooga_windows\installer_files\env\lib\site-packages\transformers\models\auto\auto_factory.py", line 471, in from_pretrained
    return model_class.from_pretrained(
  File "C:\Users\zach1\oobabooga_windows\installer_files\env\lib\site-packages\transformers\modeling_utils.py", line 2405, in from_pretrained
    raise EnvironmentError(
OSError: Error no file named pytorch_model.bin, tf_model.h5, model.ckpt.index or flax_model.msgpack found in directory models\facebook_galactica-6.7b.

zark119 · May 2, 2023

Same issue here.

Same with TheBloke/WizardLM-7B-uncensored-GPTQ.

StingrayA · May 20, 2023

Same with TheBloke/Wizard-Vicuna-13B-Uncensored-GPTQ.

ohh25 · May 22, 2023

This issue has been closed due to inactivity for 6 weeks. If you believe it is still relevant, please leave a comment below. You can tag a developer in your comment.

github-actions[bot] · Aug 28, 2023