Request: Need models in ONNX and .pt format (not just .bin and .config)
I was requesting the model in another format because I cannot convert it myself without the proper model configuration file (I've tried). I need it in ONNX or .pt format, specifically for a Unity application called Depthviewer. Can we make this happen?
Here is a list of models and the formats they are available in. As you can see, depth-anything has ONNX; I was hoping Marigold could provide this as well: https://airtable.com/appjWiS91OlaXXtf0/shrchKmROzpsq0HFw/tblviBOLphAw5Befd
I converted Marigold to ONNX and uploaded it to HF in case it's useful to you: https://huggingface.co/julienkay/Marigold
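For reference, the converted files can be pulled locally with huggingface_hub (a minimal sketch; the repo id is just taken from the link above):

import huggingface_hub

# Download the whole converted ONNX checkpoint to the local HF cache
local_dir = huggingface_hub.snapshot_download("julienkay/Marigold")
print(local_dir)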
would love to have it in CoreML too!
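For anyone who wants to attempt that, a rough sketch of a CoreML export for one sub-module (here the VAE decoder) via torch.jit.trace and coremltools could look like the following. This is untested; the wrapper class, input shape, and tensor name are my assumptions, and the UNet and text encoder would need the same treatment:

import torch
import coremltools as ct
from diffusers import AutoencoderKL

# Hypothetical wrapper so the decode call can be traced as a plain module
class VAEDecoder(torch.nn.Module):
    def __init__(self, vae):
        super().__init__()
        self.vae = vae
    def forward(self, latents):
        return self.vae.decode(latents).sample

vae = AutoencoderKL.from_pretrained("Bingxin/Marigold", subfolder="vae")
decoder = VAEDecoder(vae).eval()

# Assumed latent shape: a 768x768 input gives 96x96 latents for SD-style VAEs
example = torch.randn(1, 4, 96, 96)
traced = torch.jit.trace(decoder, example)

mlmodel = ct.convert(
    traced,
    inputs=[ct.TensorType(name="latents", shape=example.shape)],
    convert_to="mlprogram",
)
mlmodel.save("MarigoldVAEDecoder.mlpackage")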
Please share the scripts to automate these steps here; we will consider including these harness bits in the repository at a later stage.
@julienkay I see the folder structure is different: the VAE encoder and decoder were moved to the top level. Is there a way to keep the original structure? If this was done intentionally, how is this ONNX checkpoint used?
Essentially, I just used the conversion script from diffusers.
Here is the code (probably not all of these packages are required):
# Install dependencies (some of these are probably optional)
!pip install wheel wget
!pip install git+https://github.com/huggingface/diffusers.git
!pip install transformers onnxruntime onnx torch ftfy spacy scipy accelerate
# DirectML execution provider (Windows only; skip on other platforms)
!pip install onnxruntime-directml --force-reinstall
# Pin protobuf to a version compatible with the ONNX exporter
!pip install protobuf==3.20.2

# Fetch the stock Stable Diffusion -> ONNX conversion script from diffusers
!python -m wget https://raw.githubusercontent.com/huggingface/diffusers/main/scripts/convert_stable_diffusion_checkpoint_to_onnx.py -o convert_stable_diffusion_checkpoint_to_onnx.py

# Marigold uses the SD architecture, so the stock converter works on it as-is
!mkdir model
!python convert_stable_diffusion_checkpoint_to_onnx.py --model_path="Bingxin/Marigold" --output_path="model/marigold_onnx"
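If it helps, here is a quick sanity check on the export (assuming the converter's usual output layout with unet/, vae_encoder/, and vae_decoder/ subfolders):

import onnxruntime as ort

# Load the exported UNet and list the inputs it expects
sess = ort.InferenceSession("model/marigold_onnx/unet/model.onnx")
for inp in sess.get_inputs():
    print(inp.name, inp.shape, inp.type)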
AFAIK, separate encoder/decoder modules are the "standard" way for ONNX-based pipelines in diffusers. Like the OP, my interest was mostly in using the ONNX checkpoints in another inference framework (in this case Unity Sentis). I guess using them in Python would require adding a separate ONNX-specific pipeline like the ones found in diffusers/optimum for the most common pipelines such as SD 1.5 / XL.
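To illustrate what such a pipeline would have to do, here is a bare-bones sketch of wiring the exported parts together by hand. The input names follow the diffusers conversion script as far as I can tell, and the scheduler loop plus Marigold's ensembling/post-processing are omitted entirely:

import numpy as np
import onnxruntime as ort

root = "model/marigold_onnx"  # output path from the conversion step above
vae_encoder = ort.InferenceSession(f"{root}/vae_encoder/model.onnx")
unet = ort.InferenceSession(f"{root}/unet/model.onnx")
vae_decoder = ort.InferenceSession(f"{root}/vae_decoder/model.onnx")

# Encode an RGB image into latent space (placeholder input; real code would
# preprocess the image exactly like the original Marigold pipeline does)
image = np.random.rand(1, 3, 768, 768).astype(np.float32)
latents = vae_encoder.run(None, {"sample": image})[0]
print(latents.shape)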