
Vague error: `ValueError: With local_files_only set to False, you must first locally save the tokenizer in the following path: 'openai/clip-vit-large-patch14'.` how to fix?

Open openSourcerer9000 opened this issue 1 year ago • 11 comments

Trying to convert a `.safetensors` Stable Diffusion model to the format Hugging Face requires. It throws a vague non sequitur of an error:

pipe = diffusers.StableDiffusionPipeline.from_single_file(str(aPathlibPath/"vodkaByFollowfoxAI_v40.safetensors") )

   1241     )
   1242 else:
   1243     return {"text_encoder": text_encoder, "tokenizer": tokenizer}

ValueError: With local_files_only set to False, you must first locally save the tokenizer in the following path: 'openai/clip-vit-large-patch14'.

What tokenizer? What path? Where would I get this file? This script already downloaded something locally; why not download this extra thing as well instead of throwing an error?

When I pass local_files_only=True, it says the SAME thing: ValueError: With local_files_only set to True, you must first locally save the tokenizer in the following path: 'openai/clip-vit-large-patch14'.
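For context, the message means `from_single_file` failed to download the CLIP tokenizer that Stable Diffusion checkpoints depend on from the `openai/clip-vit-large-patch14` Hub repo, and then found no copy in the local cache either. A quick offline way to see whether those files ever made it into the cache (a sketch, assuming a recent `huggingface_hub`; the filenames are the tokenizer's standard ones):

```python
# Diagnostic sketch: check whether the CLIP tokenizer files the pipeline
# needs are present in the local Hugging Face cache. try_to_load_from_cache
# never hits the network; it returns a path if the file is cached and None
# if it was never downloaded.
from huggingface_hub import try_to_load_from_cache

for fname in ("tokenizer_config.json", "vocab.json", "merges.txt"):
    cached = try_to_load_from_cache("openai/clip-vit-large-patch14", fname)
    print(f"{fname}: {cached if cached else 'not in cache'}")
```

If the files are missing and downloads keep failing, one workaround is to fetch and save the tokenizer once with `CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14").save_pretrained(some_dir)` and then pass it in explicitly via `from_single_file(..., tokenizer=...)`.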

openSourcerer9000 avatar Feb 25 '24 15:02 openSourcerer9000

Hi @openSourcerer9000 could you share which script are you using to try to convert your file? Additionally could you provide the exact command that you used to run it please?

DN6 avatar Feb 26 '24 11:02 DN6

My bad, please see the edit above.

openSourcerer9000 avatar Feb 26 '24 11:02 openSourcerer9000

@openSourcerer9000 It's hard to debug without access to the actual checkpoint. Is your checkpoint hosted on Civit or the HF Hub or is it only available locally?

DN6 avatar Feb 26 '24 13:02 DN6

It's this one, v4 https://civitai.com/models/61086/vodka-by-followfoxai


openSourcerer9000 avatar Feb 26 '24 13:02 openSourcerer9000

Thanks! And what version of diffusers are you using? You can find out by running `diffusers-cli env`. I was able to load the checkpoint, so it might have something to do with your local Hugging Face cache.
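If it does turn out to be a corrupted cache, deleting the cached `openai/clip-vit-large-patch14` snapshot and letting it re-download often clears this up. A sketch of how to locate it with `huggingface_hub` (the repo id and the manual-delete step are the only assumptions here):

```python
# Sketch: locate the possibly corrupted CLIP snapshot in the HF cache so it
# can be deleted and re-downloaded. scan_cache_dir raises CacheNotFound if
# no cache directory exists yet.
from huggingface_hub import scan_cache_dir
from huggingface_hub.utils import CacheNotFound

try:
    cache = scan_cache_dir()
except CacheNotFound:
    cache = None

if cache is not None:
    for repo in cache.repos:
        if repo.repo_id == "openai/clip-vit-large-patch14":
            # delete this folder, then re-run from_single_file
            print(repo.repo_path, repo.size_on_disk_str)
```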

DN6 avatar Feb 26 '24 13:02 DN6

Hey, I am facing a similar issue. The logs are below:

2024-03-27 11:52:28.219201: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
WARNING:tensorflow:From C:\Users\sarin\AppData\Local\Programs\Python\Python310\lib\site-packages\keras\src\losses.py:2976: The name tf.losses.sparse_softmax_cross_entropy is deprecated. Please use tf.compat.v1.losses.sparse_softmax_cross_entropy instead.

INFO:     Started server process [17816]
INFO:     Waiting for application startup.
Model located at ./weights/jugg.safetensors
Device: cpu
ERROR:    Traceback (most recent call last):
  File "C:\Users\sarin\AppData\Local\Programs\Python\Python310\lib\site-packages\diffusers\loaders\single_file_utils.py", line 1480, in create_text_encoders_and_tokenizers_from_ldm
    text_encoder = create_text_encoder_from_ldm_clip_checkpoint(
  File "C:\Users\sarin\AppData\Local\Programs\Python\Python310\lib\site-packages\diffusers\loaders\single_file_utils.py", line 1173, in create_text_encoder_from_ldm_clip_checkpoint
    text_model.load_state_dict(text_model_dict)
  File "C:\Users\sarin\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 2152, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for CLIPTextModel:
        Missing key(s) in state_dict: "text_model.embeddings.position_ids".

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\sarin\AppData\Local\Programs\Python\Python310\lib\site-packages\starlette\routing.py", line 677, in lifespan
    async with self.lifespan_context(app) as maybe_state:
  File "C:\Users\sarin\AppData\Local\Programs\Python\Python310\lib\contextlib.py", line 199, in __aenter__
    return await anext(self.gen)
  File "C:\Users\sarin\OneDrive\Desktop\banter-ai\stable-diff\app\main.py", line 25, in lifespan
    app.state.model = init_model()
  File "C:\Users\sarin\OneDrive\Desktop\banter-ai\stable-diff\app\models\init_model.py", line 37, in init_model
    pipe = StableDiffusionXLPipeline.from_single_file(
  File "C:\Users\sarin\AppData\Local\Programs\Python\Python310\lib\site-packages\huggingface_hub\utils\_validators.py", line 118, in _inner_fn
    return fn(*args, **kwargs)
  File "C:\Users\sarin\AppData\Local\Programs\Python\Python310\lib\site-packages\diffusers\loaders\single_file.py", line 289, in from_single_file
    components = build_sub_model_components(
  File "C:\Users\sarin\AppData\Local\Programs\Python\Python310\lib\site-packages\diffusers\loaders\single_file.py", line 102, in build_sub_model_components
    text_encoder_components = create_text_encoders_and_tokenizers_from_ldm(
  File "C:\Users\sarin\AppData\Local\Programs\Python\Python310\lib\site-packages\diffusers\loaders\single_file_utils.py", line 1485, in create_text_encoders_and_tokenizers_from_ldm
    raise ValueError(
ValueError: With local_files_only set to False, you must first locally save the text_encoder and tokenizer in the following path: 'openai/clip-vit-large-patch14'.

ERROR:    Application startup failed. Exiting.

Also, the command `diffusers-cli env` gives the output below:

- `diffusers` version: 0.27.2
- Platform: Windows-10-10.0.19045-SP0
- Python version: 3.10.4
- PyTorch version (GPU?): 2.1.2+cpu (False)
- Huggingface_hub version: 0.20.2
- Transformers version: 4.30.1
- Accelerate version: not installed
- xFormers version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
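For what it's worth, the `Missing key(s) in state_dict: "text_model.embeddings.position_ids"` part of that traceback is a known symptom of a diffusers/transformers version mismatch: newer transformers releases stopped serializing CLIP's `position_ids` buffer, so mismatched loading code no longer finds the key it expects. A minimal sketch for printing the relevant versions when reporting (package names only; no project-specific assumptions):

```python
# Sketch: print the versions most relevant to this error. Aligning the
# transformers version with the diffusers version (typically by upgrading
# transformers) is the usual fix, since newer transformers made CLIP's
# position_ids a computed buffer that is no longer stored in checkpoints.
import importlib.metadata as md

for pkg in ("diffusers", "transformers", "torch", "huggingface_hub"):
    try:
        print(pkg, md.version(pkg))
    except md.PackageNotFoundError:
        print(pkg, "not installed")
```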

saketsarin avatar Mar 27 '24 06:03 saketsarin

> Hey, I am facing a similar issue. The logs are below: [...]
Resolved this, guys. It turned out some of the libraries were incompatible. I reinstalled everything from the requirements.txt file I had, plus a few libraries manually, and then it worked.

saketsarin avatar Mar 27 '24 07:03 saketsarin

> Resolved this, guys. It turned out some of the libraries were incompatible. I reinstalled everything from the requirements.txt file I had, plus a few libraries manually, and then it worked.

Can you tell me in more detail how you solved it? I've reinstalled all the libraries, but I can't find any incompatibility. Is there a library I should pay particular attention to?

uguryilmazdev avatar Apr 16 '24 21:04 uguryilmazdev

> > Resolved this, guys. It turned out some of the libraries were incompatible. I reinstalled everything from the requirements.txt file I had, plus a few libraries manually, and then it worked.
>
> Can you tell me in more detail how you solved it? I've reinstalled all the libraries, but I can't find any incompatibility. Is there a library I should pay particular attention to?

Hey! Sorry for the delay. I already had a working version before, so I just reverted some libraries to the versions from that setup. I don't remember exactly which ones, but it was mostly an incompatibility between the diffusers and PyTorch versions. Once those were compatible with each other, the errors from installing some other PyTorch-related libraries helped me resolve the correct versions for them. I don't have this project anymore, but I'll let you know if I can find the file with those compatible versions. Hope this helped.

Best, Saket

saketsarin avatar Apr 27 '24 16:04 saketsarin

Hi @uguryilmazdev could you please share a code example to reproduce this issue if you're still facing it? Also please run `diffusers-cli env` and share the output here.

DN6 avatar Apr 29 '24 05:04 DN6

> Hi @uguryilmazdev could you please share a code example to reproduce this issue if you're still facing it? Also please run `diffusers-cli env` and share the output here.

I apologise for writing so late. I had to concentrate on another project. So I've stopped working on it for now.

uguryilmazdev avatar May 01 '24 09:05 uguryilmazdev

This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.

Please note that issues that do not follow the contributing guidelines are likely to be ignored.

github-actions[bot] avatar Sep 14 '24 15:09 github-actions[bot]