IF
Internal: unk is not defined
I have a problem that I can't solve: the official example cannot be run.
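For reference, here is the loading step that fails, reconstructed from the traceback below (a minimal sketch of the official DeepFloyd IF stage-I example; only the imports are assumed):

import torch
from diffusers import DiffusionPipeline

# Line 7 of test.py in the traceback: load stage I of DeepFloyd IF with the
# fp16 variant. The crash happens while the T5 tokenizer sub-model is being
# loaded from the local Hugging Face cache.
stage_1 = DiffusionPipeline.from_pretrained(
    "DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16
)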
watermarker\diffusion_pytorch_model.safetensors not found
A mixture of fp16 and non-fp16 filenames will be loaded.
Loaded fp16 filenames:
[unet/diffusion_pytorch_model.fp16.bin, text_encoder/pytorch_model.fp16-00002-of-00002.bin, safety_checker/pytorch_model.fp16.bin, text_encoder/pytorch_model.fp16-00001-of-00002.bin]
Loaded non-fp16 filenames:
[watermarker/diffusion_pytorch_model.bin]
If this behavior is not expected, please check your folder structure.
Loading pipeline components...: 14%|█▍ | 1/7 [00:00<00:01, 5.81it/s]You are using the default legacy behaviour of the <class 'transformers.models.t5.tokenization_t5.T5Tokenizer'>. If you see this, DO NOT PANIC! This is expected, and simply means that the legacy (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set `legacy=False`. This should only be set if you understand what it means, and thoroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565
Loading pipeline components...: 43%|████▎ | 3/7 [00:00<00:00, 16.48it/s]
Traceback (most recent call last):
File "E:\pycode\test.py", line 7, in
stage_1 = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16)
File "E:\pycode\venv\lib\site-packages\diffusers\pipelines\pipeline_utils.py", line 1105, in from_pretrained
loaded_sub_model = load_sub_model(
File "E:\pycode\venv\lib\site-packages\diffusers\pipelines\pipeline_utils.py", line 472, in load_sub_model
loaded_sub_model = load_method(os.path.join(cached_folder, name), **loading_kwargs)
File "E:\pycode\venv\lib\site-packages\transformers\tokenization_utils_base.py", line 1854, in from_pretrained
return cls.from_pretrained(
File "E:\pycode\venv\lib\site-packages\transformers\tokenization_utils_base.py", line 2017, in from_pretrained
tokenizer = cls(*init_inputs, **init_kwargs)
File "E:\pycode\venv\lib\site-packages\transformers\models\t5\tokenization_t5.py", line 194, in init
self.sp_model = self.get_spm_processor()
File "E:\pycode\venv\lib\site-packages\transformers\models\t5\tokenization_t5.py", line 199, in get_spm_processor
tokenizer.Load(self.vocab_file)
File "E:\pycode\venv\lib\site-packages\sentencepiece_init.py", line 905, in Load
return self.LoadFromFile(model_file)
File "E:\pycode\venv\lib\site-packages\sentencepiece_init.py", line 310, in LoadFromFile
return _sentencepiece.SentencePieceProcessor_LoadFromFile(self, arg)
RuntimeError: Internal: unk is not defined.
I've solved it: the .json files (symlinks) in the model folder under ...cache\huggingface\hub were corrupted. After re-downloading them from huggingface and placing them in the specified folder, it runs.
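If you'd rather not replace the cached files by hand, passing force_download=True re-fetches every file in the repository even when a cached copy exists (a minimal sketch; force_download is a standard from_pretrained argument):

import torch
from diffusers import DiffusionPipeline

# Re-download the repository files, overwriting whatever is in the local
# Hugging Face cache. Useful when a cached file is truncated or corrupted.
stage_1 = DiffusionPipeline.from_pretrained(
    "DeepFloyd/IF-I-XL-v1.0",
    variant="fp16",
    torch_dtype=torch.float16,
    force_download=True,
)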
Hello, may I ask whether the problem is with the json files inside the large model?
It's not the json that has the problem; it's the model's own files that are broken.
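To locate the cached snapshot that from_pretrained is actually reading (so the broken files can be deleted and re-downloaded), the huggingface_hub cache utilities can help; a minimal sketch, assuming huggingface_hub is installed:

from huggingface_hub import scan_cache_dir

# Walk the local Hugging Face cache and print where the DeepFloyd/IF-I-XL-v1.0
# snapshot is stored, along with its size on disk. Deleting that folder forces
# a fresh download on the next from_pretrained call.
cache_info = scan_cache_dir()
for repo in cache_info.repos:
    if repo.repo_id == "DeepFloyd/IF-I-XL-v1.0":
        print(repo.repo_id, repo.repo_path, repo.size_on_disk, "bytes")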