stable-diffusion

Can't load tokenizer for 'openai/clip-vit-large-patch14'

Open · TsundereGod opened this issue 1 year ago · 1 comment

I have tried troubleshooting this issue and asking others for help, but I can't seem to get it fixed.

Creating model from config: C:\Users\Kevin\Downloads\AI Art\stable-diffusion-webui\configs\v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Failed to create model quickly; will retry using slow method.
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
loading stable diffusion model: OSError
Traceback (most recent call last):
  File "C:\Users\Kevin\Downloads\AI Art\stable-diffusion-webui\webui.py", line 136, in initialize
    modules.sd_models.load_model()
  File "C:\Users\Kevin\Downloads\AI Art\stable-diffusion-webui\modules\sd_models.py", line 406, in load_model
    sd_model = instantiate_from_config(sd_config.model)
  File "C:\Users\Kevin\Downloads\AI Art\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\util.py", line 79, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "C:\Users\Kevin\Downloads\AI Art\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 563, in __init__
    self.instantiate_cond_stage(cond_stage_config)
  File "C:\Users\Kevin\Downloads\AI Art\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 630, in instantiate_cond_stage
    model = instantiate_from_config(config)
  File "C:\Users\Kevin\Downloads\AI Art\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\util.py", line 79, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "C:\Users\Kevin\Downloads\AI Art\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\encoders\modules.py", line 99, in __init__
    self.tokenizer = CLIPTokenizer.from_pretrained(version)
  File "C:\Users\Kevin\Downloads\AI Art\stable-diffusion-webui\venv\lib\site-packages\transformers\tokenization_utils_base.py", line 1785, in from_pretrained
    raise EnvironmentError(
OSError: Can't load tokenizer for 'openai/clip-vit-large-patch14'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'openai/clip-vit-large-patch14' is the correct path to a directory containing all relevant files for a CLIPTokenizer tokenizer.


Stable diffusion model failed to load, exiting
Press any key to continue . . .

Novice here, so thanks for any help!

TsundereGod · Mar 12 '23 16:03

It means that "openai/clip-vit-large-patch14" didn't download or unpack properly, which is a problem with the webui front-end installation. Please make sure you have git-lfs installed and activated before downloading any models. The webui has its own forums, so questions about it should be directed there. Cheers.
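
As a quick way to narrow this down, here is a minimal sketch (my own suggestion, not an official script) that tries to fetch the tokenizer directly with transformers, bypassing the webui. Run it inside the webui's venv, the same one shown in the traceback; force_download=True re-fetches the files in case the cached copy is corrupt.

```python
# Minimal check: can transformers fetch the CLIP tokenizer at all?
# Run this inside the webui's venv (the one shown in the traceback).
from transformers import CLIPTokenizer

try:
    # force_download=True re-downloads the files in case the cached copy is corrupt
    tokenizer = CLIPTokenizer.from_pretrained(
        "openai/clip-vit-large-patch14",
        force_download=True,
    )
    print("Tokenizer loaded, vocab size:", tokenizer.vocab_size)
except OSError as err:
    # Same failure mode as in the webui traceback above
    print("Still failing:", err)
```

If this succeeds but the webui still fails, check the other hint in the error message: make sure there is no local folder named "openai/clip-vit-large-patch14" in the working directory shadowing the Hugging Face model id.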

spikethehobbitmage · Mar 15 '23 06:03

From the error message I am assuming your question is about the Stable Diffusion webui by AUTOMATIC1111. Please be aware that this is a third-party tool and we cannot provide any support for it. You may want to look through the issues there, both open and closed, to find out whether other people have reported this problem before, or open a new issue if that is not the case.

jenuk · Apr 06 '23 12:04