convert_stable_diffusion_checkpoint_to_onnx.py doesn't convert
Describe the bug
convert_stable_diffusion_checkpoint_to_onnx.py doesn't convert .ckpt to .onnx and throws:
raise EnvironmentError(f"It looks like the config file at '{config_file}' is not a valid JSON file.")
OSError: It looks like the config file at 'K:\dreamshaper_33.ckpt' is not a valid JSON file.
Reproduction
python convert_stable_diffusion_checkpoint_to_onnx.py --model_path "K:\dreamshaper_33.ckpt" --output_path "onnx_model"
dreamshaper_33.ckpt from: https://civitai.com/models/4384/dreamshaper
Logs
Traceback (most recent call last):
File "C:\Users\LOGIN\AppData\Local\Programs\Python\Python310\lib\site-packages\diffusers\configuration_utils.py", line 380, in load_config
config_dict = cls._dict_from_json_file(config_file)
File "C:\Users\LOGIN\AppData\Local\Programs\Python\Python310\lib\site-packages\diffusers\configuration_utils.py", line 480, in _dict_from_json_file
text = reader.read()
File "C:\Users\LOGIN\AppData\Local\Programs\Python\Python310\lib\codecs.py", line 322, in decode
(result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 64: invalid start byte
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\automatic111\scripts\convert_stable_diffusion_checkpoint_to_onnx.py", line 265, in <module>
convert_models(args.model_path, args.output_path, args.opset, args.fp16)
File "C:\Users\LOGIN\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "C:\automatic111\scripts\convert_stable_diffusion_checkpoint_to_onnx.py", line 79, in convert_models
pipeline = StableDiffusionPipeline.from_pretrained(model_path, torch_dtype=dtype).to(device)
File "C:\Users\LOGIN\AppData\Local\Programs\Python\Python310\lib\site-packages\diffusers\pipelines\pipeline_utils.py", line 462, in from_pretrained
config_dict = cls.load_config(
File "C:\Users\LOGIN\AppData\Local\Programs\Python\Python310\lib\site-packages\diffusers\configuration_utils.py", line 382, in load_config
raise EnvironmentError(f"It looks like the config file at '{config_file}' is not a valid JSON file.")
OSError: It looks like the config file at 'K:\dreamshaper_33.ckpt' is not a valid JSON file.
System Info
Python 3.10.6, diffusers 0.12.1, Windows 10 Home 64-bit
Convert Stable Diffusion model to ONNX format
Some models are not available in ONNX format and will need to be converted.
Install wget for Windows
- Download wget for Windows and install the package.
- Copy the wget.exe file into your C:\Windows\System32 folder.
Convert Original Stable Diffusion to Diffusers (Ckpt File)
- Example File to Convert: Anything-V3.0.ckpt
- Download the latest version of the Convert Original Stable Diffusion to Diffusers script
- Run
python convert_original_stable_diffusion_to_diffusers.py --checkpoint_path="./model.ckpt" --dump_path="./model_diffusers"
Notes:
- Change --checkpoint_path="./model.ckpt" to match the .ckpt file to convert
- Change --dump_path="./model_diffusers" to the output folder location to use
- You will need to run Convert Stable Diffusion Checkpoint to Onnx (see below) to use the model
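If you prefer to do this step from Python rather than the CLI script, the same conversion helper the script calls (visible in the tracebacks further down) can be used directly. A minimal sketch, assuming diffusers 0.12.x, where the helper is load_pipeline_from_original_stable_diffusion_ckpt in diffusers.pipelines.stable_diffusion.convert_from_ckpt; the paths are placeholders and omegaconf must be installed:
from diffusers.pipelines.stable_diffusion.convert_from_ckpt import (
    load_pipeline_from_original_stable_diffusion_ckpt,
)
# Convert the original .ckpt into a regular diffusers pipeline.
# Passing v1-inference.yaml explicitly avoids the automatic config lookup.
pipe = load_pipeline_from_original_stable_diffusion_ckpt(
    checkpoint_path="./model.ckpt",
    original_config_file="./v1-inference.yaml",
)
# Write the converted weights and configs out as a diffusers folder.
pipe.save_pretrained("./model_diffusers")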
Convert Stable Diffusion Checkpoint to Onnx
- Example File to Convert: waifu-diffusion
- Download the latest version of the Convert Stable Diffusion Checkpoint to Onnx script
- Run
python convert_stable_diffusion_checkpoint_to_onnx.py --model_path="./model_diffusers" --output_path="./model_onnx"
- Change --model_path="./model_diffusers" to the converted model folder from the previous step and --output_path="./model_onnx" to the output folder location to use
https://gist.github.com/averad/256c507baa3dcc9464203dc14610d674#convert-stable-diffusion-model-to-onnx-format
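Once the ONNX export finishes, the output folder can be loaded back for a quick sanity check. A minimal sketch, assuming diffusers 0.12.x with onnxruntime installed; the prompt and paths are placeholders:
from diffusers import OnnxStableDiffusionPipeline
# Load the exported pipeline from the folder produced by --output_path above.
pipe = OnnxStableDiffusionPipeline.from_pretrained(
    "./model_onnx",
    provider="CPUExecutionProvider",
)
# Generate one test image to confirm the conversion worked.
image = pipe("a photo of an astronaut riding a horse", num_inference_steps=20).images[0]
image.save("onnx_test.png")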
This script also doesn't work. I used Anything-V3.0.ckpt from https://huggingface.co/andite/anything-v4.0/blob/main/Anything-V3.0-pruned.ckpt
Traceback (most recent call last):
File "C:\test\convert_original_stable_diffusion_to_diffusers.py", line 103, in <module>
pipe = load_pipeline_from_original_stable_diffusion_ckpt(
File "C:\Users\LOGIN\AppData\Local\Programs\Python\Python310\lib\site-packages\diffusers\pipelines\stable_diffusion\convert_from_ckpt.py", line 824, in load_pipeline_from_original_stable_diffusion_ckpt
raise ValueError(BACKENDS_MAPPING["omegaconf"][1])
KeyError: 'omegaconf'
Run
wget https://raw.githubusercontent.com/CompVis/stable-diffusion/main/configs/stable-diffusion/v1-inference.yaml
then
python convert_original_stable_diffusion_to_diffusers.py --checkpoint_path="./model.ckpt" --dump_path="./model_diffusers" --original_config_file="./v1-inference.yaml"
python convert_original_stable_diffusion_to_diffusers.py --checkpoint_path="Anything-V3.0-pruned.ckpt" --dump_path="hej" --original_config_file="v1-inference.yaml"
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ K:\convert_original_stable_diffusion_to_diffusers.py:103 in <module> │
│ │
│ 100 │ parser.add_argument("--device", type=str, help="Device to use (e.g. cpu, cuda:0, cud │
│ 101 │ args = parser.parse_args() │
│ 102 │ │
│ ❱ 103 │ pipe = load_pipeline_from_original_stable_diffusion_ckpt( │
│ 104 │ │ checkpoint_path=args.checkpoint_path, │
│ 105 │ │ original_config_file=args.original_config_file, │
│ 106 │ │ image_size=args.image_size, │
│ │
│ C:\Users\LOGIN\AppData\Local\Programs\Python\Python310\lib\site-packages\diffusers\pipelines\stable │
│ _diffusion\convert_from_ckpt.py:824 in load_pipeline_from_original_stable_diffusion_ckpt │
│ │
│ 821 │ """ │
│ 822 │ │
│ 823 │ if not is_omegaconf_available(): │
│ ❱ 824 │ │ raise ValueError(BACKENDS_MAPPING["omegaconf"][1]) │
│ 825 │ │
│ 826 │ from omegaconf import OmegaConf │
│ 827 │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
KeyError: 'omegaconf'
Thank you, but same thing.
I did:
pip install omegaconf
and it seems the script is doing some work now.
I had to run pip install safetensors omegaconf and check out the v0.12.1 tag for convert_original_stable_diffusion_to_diffusers.py to work.
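For reference, a sketch of those steps (assuming the conversion script lives under scripts/ in the diffusers repository, so checking out the tag matching the installed package version keeps the two in sync):
pip install safetensors omegaconf
git clone https://github.com/huggingface/diffusers
cd diffusers
git checkout v0.12.1
python scripts/convert_original_stable_diffusion_to_diffusers.py --checkpoint_path="./model.ckpt" --dump_path="./model_diffusers"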
Thanks @ry167. It kinda worked, but it created a whole directory with lots of files. It should rather be just one .onnx file.
cc @anton-l @echarlaix can we maybe make it easier?
Thank you @averad for a great tutorial! This is a good candidate for a docs PR :smiley:
The point of friction here is that the ONNX conversion script doesn't mention that it isn't intended for ckpt -> onnx, so the whole pipeline needs to be ckpt -> diffusers -> onnx.
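Put together, the working sequence is roughly the following (a sketch pieced together from the commands earlier in this thread, assuming omegaconf and safetensors are already installed):
# 1. Fetch the original Stable Diffusion v1 inference config
wget https://raw.githubusercontent.com/CompVis/stable-diffusion/main/configs/stable-diffusion/v1-inference.yaml
# 2. ckpt -> diffusers folder
python convert_original_stable_diffusion_to_diffusers.py --checkpoint_path="K:\dreamshaper_33.ckpt" --dump_path="./model_diffusers" --original_config_file="./v1-inference.yaml"
# 3. diffusers folder -> ONNX
python convert_stable_diffusion_checkpoint_to_onnx.py --model_path="./model_diffusers" --output_path="./model_onnx"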
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the contributing guidelines are likely to be ignored.