InstantX
Updated.
Are you using our [code](https://github.com/InstantStyle/InstantStyle/blob/main/ip_adapter/ip_adapter.py#L67)? We made some minimal modifications to the original IP-Adapter codebase.
@g711ab You should clone our repo or replace the original tencent-ailab/IP-Adapter with [ours](https://github.com/InstantStyle/InstantStyle/tree/main/ip_adapter).
Here are some [general suggestions](https://huggingface.co/docs/diffusers/en/optimization/memory); not every method worked in our testing, but `pipe.enable_vae_tiling()` does reduce memory consumption by about 3GB.
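For reference, a minimal sketch of turning VAE tiling on (assuming the SDXL base pipeline as an example; other checkpoints work the same way):

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Decode latents tile by tile instead of in one pass;
# in our testing this saves roughly 3GB of VRAM.
pipe.enable_vae_tiling()
```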
We have added an [experimental distributed inference](https://github.com/InstantStyle/InstantStyle?tab=readme-ov-file#distributed-inference) feature from diffusers.
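The exact script is in the linked README section; as a rough sketch, the general diffusers/accelerate pattern looks like the following (model id and prompts are placeholders, and our script may split the work differently):

```python
# Launch with: accelerate launch --num_processes=2 distributed_inference.py
import torch
from accelerate import PartialState
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
)

state = PartialState()
pipe.to(state.device)

# Each process renders its own slice of the prompt list in parallel.
prompts = ["a cat, watercolor style", "a dog, watercolor style"]
with state.split_between_processes(prompts) as subset:
    for i, prompt in enumerate(subset):
        image = pipe(prompt).images[0]
        image.save(f"result_{state.process_index}_{i}.png")
```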
For the ComfyUI problem, please report it [here](https://github.com/cubiq/ComfyUI_InstantID).
In diffusers, you should load the LoRA first and then load the IP-Adapter.
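A sketch of that ordering (the LoRA path and weight name are placeholders; the IP-Adapter weights shown are the standard SDXL ones):

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# 1. Load the LoRA first ...
pipe.load_lora_weights("path/to/lora", weight_name="your_lora.safetensors")

# 2. ... then load the IP-Adapter on top of it.
pipe.load_ip_adapter(
    "h94/IP-Adapter", subfolder="sdxl_models", weight_name="ip-adapter_sdxl.bin"
)
```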
InstantStyle is now natively supported in diffusers; usage can be found [here](https://github.com/InstantStyle/InstantStyle?tab=readme-ov-file#use-in-diffusers). You can directly use the img2img pipeline and load InstantStyle as instructed there.
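A minimal sketch of the img2img route with the diffusers-native InstantStyle scale dict (image paths and prompt are placeholders):

```python
import torch
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import load_image

pipe = AutoPipelineForImage2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_ip_adapter(
    "h94/IP-Adapter", subfolder="sdxl_models", weight_name="ip-adapter_sdxl.bin"
)

# InstantStyle-style control: inject the style image only into the
# style-sensitive up-block attention layers instead of all layers.
pipe.set_ip_adapter_scale({"up": {"block_0": [0.0, 1.0, 0.0]}})

style_image = load_image("path/to/style.jpg")
content_image = load_image("path/to/content.jpg")

image = pipe(
    prompt="a cat, masterpiece, best quality",
    image=content_image,
    ip_adapter_image=style_image,
    strength=0.6,
    guidance_scale=5.0,
).images[0]
image.save("stylized.png")
```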
That is expected. Playground2.5 has a different distribution from the original SDXL, so it cannot inherit plugins trained on SDXL. We may release checkpoints for Playground2.5 later.