diffusers
OSError: Can't load tokenizer for 'stabilityai/stable-diffusion-xl-base-1.0'
Describe the bug
code is:

# DiffusionPipeline
import torch
from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline, AutoPipelineForText2Image

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"

# Load SDXL base model
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
    use_safetensors=False,
    local_files_only=False,
)
pipe.to("cuda")

image = pipe(prompt=prompt, output_type="latent").images[0]
the error is:

OSError                                   Traceback (most recent call last)
Cell In[4], line 9
6 prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
8 # Load SDXL base model
----> 9 pipe = StableDiffusionXLPipeline.from_pretrained(
10 "stabilityai/stable-diffusion-xl-base-1.0",
11 torch_dtype=torch.float16,
12 variant="fp16",
13 use_safetensors=False,
14 local_files_only=False,
15 proxies=proxies
16 )
17 pipe.to("cuda")
19 image = pipe(prompt=prompt,output_type="latent").images[0]
File ~/.conda/envs/diffuser310/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py:118, in validate_hf_hub_args.<locals>._inner_fn(*args, **kwargs)
File ~/.conda/envs/diffuser310/lib/python3.10/site-packages/diffusers/pipelines/pipeline_utils.py:821, in DiffusionPipeline.from_pretrained(cls, pretrained_model_name_or_path, **kwargs)
818 else:
819 # load sub model
820 print('load_sub_model:', name, library_name, class_name)
--> 821 loaded_sub_model = load_sub_model(
822 library_name=library_name,
823 class_name=class_name,
824 importable_classes=importable_classes,
825 pipelines=pipelines,
826 is_pipeline_module=is_pipeline_module,
827 pipeline_class=pipeline_class,
828 torch_dtype=torch_dtype,
829 provider=provider,
830 sess_options=sess_options,
831 device_map=device_map,
832 max_memory=max_memory,
833 offload_folder=offload_folder,
834 offload_state_dict=offload_state_dict,
835 model_variants=model_variants,
836 name=name,
837 from_flax=from_flax,
838 variant=variant,
839 low_cpu_mem_usage=low_cpu_mem_usage,
840 cached_folder=cached_folder,
841 )
842 logger.info(
843 f"Loaded {name} as {class_name} from {name}
subfolder of {pretrained_model_name_or_path}."
844 )
846 init_kwargs[name] = loaded_sub_model # UNet(...), # DiffusionSchedule(...)
File ~/.conda/envs/diffuser310/lib/python3.10/site-packages/diffusers/pipelines/pipeline_loading_utils.py:476, in load_sub_model(library_name, class_name, importable_classes, pipelines, is_pipeline_module, pipeline_class, torch_dtype, provider, sess_options, device_map, max_memory, offload_folder, offload_state_dict, model_variants, name, from_flax, variant, low_cpu_mem_usage, cached_folder)
473 loaded_sub_model = load_method(os.path.join(cached_folder, name), **loading_kwargs)
474 else:
475 # else load from the root directory
--> 476 loaded_sub_model = load_method(cached_folder, **loading_kwargs)
478 return loaded_sub_model

File ~/.conda/envs/diffuser310/lib/python3.10/site-packages/transformers/tokenization_utils_base.py:2033, in PreTrainedTokenizerBase.from_pretrained(cls, pretrained_model_name_or_path, cache_dir, force_download, local_files_only, token, revision, trust_remote_code, *init_inputs, **kwargs)
2027 logger.info(
2028 f"Can't load following files from cache: {unresolved_files} and cannot check if these "
2029 "files are necessary for the tokenizer to operate."
2030 )
2032 if all(full_file_name is None for full_file_name in resolved_vocab_files.values()):
-> 2033 raise EnvironmentError(
2034 f"Can't load tokenizer for '{pretrained_model_name_or_path}'. If you were trying to load it from "
2035 "'https://huggingface.co/models', make sure you don't have a local directory with the same name. "
2036 f"Otherwise, make sure '{pretrained_model_name_or_path}' is the correct path to a directory "
2037 f"containing all relevant files for a {cls.__name__} tokenizer."
2038 )
2040 for file_id, file_path in vocab_files.items():
2041 if file_id not in resolved_vocab_files:
OSError: Can't load tokenizer for 'stabilityai/stable-diffusion-xl-base-1.0'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'stabilityai/stable-diffusion-xl-base-1.0' is the correct path to a directory containing all relevant files for a CLIPTokenizer tokenizer.
Reproduction
# DiffusionPipeline
import torch
from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline, AutoPipelineForText2Image

proxy = 'http://ntproxy.qa.nt.ctripcorp.com:8080'
proxies = {'http': proxy, 'https': proxy}
prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"

# Load SDXL base model
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
    use_safetensors=False,
    local_files_only=False,
    proxies=proxies,
)
pipe.to("cuda")

image = pipe(prompt=prompt, output_type="latent").images[0]
Logs
No response
System Info
diffusers==0.27.1, Python 3.10, Linux
Who can help?
No response
That repository only has the safetensors checkpoints, so you need to use use_safetensors=True.
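For reference, a minimal sketch of the load call with only that flag changed (assuming the machine can reach the Hub):

import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
    use_safetensors=True,  # the repo only ships safetensors weights
)
pipe.to("cuda")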
I changed use_safetensors to True, but I get the same error. And I downloaded all the files from https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/tree/main, not only the safetensors.
Hmm, so you're saying you manually downloaded the files? In that case you should pass the path to the directory where you downloaded them.
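Something along these lines (a sketch; the local path is just a placeholder for wherever the files were saved):

import torch
from diffusers import StableDiffusionXLPipeline

# Placeholder path; point it at the folder that contains model_index.json
local_dir = "/path/to/stable-diffusion-xl-base-1.0"

pipe = StableDiffusionXLPipeline.from_pretrained(
    local_dir,
    torch_dtype=torch.float16,
    variant="fp16",
    use_safetensors=True,
    local_files_only=True,  # never try to reach the Hub
)
pipe.to("cuda")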
Also, that proxy doesn't work for me with https.
With this code, it works for me:
import torch
from diffusers import StableDiffusionXLPipeline
proxy = "ntproxy.qa.nt.ctripcorp.com:8080"
proxies = {"http": proxy}
prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
# Load SDXL base model
pipe = StableDiffusionXLPipeline.from_pretrained(
"stabilityai/stable-diffusion-xl-base-1.0",
torch_dtype=torch.float16,
variant="fp16",
use_safetensors=True,
local_files_only=False,
proxies=proxies,
)
pipe.to("cuda")
image = pipe(prompt=prompt, output_type="latent").images[0]
Also check as the error says:
make sure you don't have a local directory with the same name. Otherwise, make sure 'stabilityai/stable-diffusion-xl-base-1.0' is the correct path to a directory containing all relevant files
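A quick way to check the first case (a hypothetical one-liner, run from the script's working directory):

import os

# True means a local folder is shadowing the Hub repo id, and from_pretrained
# will try to load the tokenizer from that folder instead of the Hub.
print(os.path.isdir("stabilityai/stable-diffusion-xl-base-1.0"))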
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the contributing guidelines are likely to be ignored.