b4sh

10 comments by b4sh

I'm not a programmer, but I've experimented a bit with xformers and sequential CPU offload. Without xformers the demo didn't work at all on my card (RTX 4070). After using xformers...
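The xformers switch mentioned above can be sketched as below. `enable_memory_efficient_attention` is a hypothetical helper wrapping the real diffusers call `enable_xformers_memory_efficient_attention()`; the fallback assumes recent diffusers versions already use PyTorch's scaled-dot-product attention by default.

```python
def enable_memory_efficient_attention(pipe):
    """Turn on xFormers attention if the package is installed.

    `pipe` is any loaded diffusers DiffusionPipeline. Returns which
    attention backend ended up active ("xformers" or "sdp").
    """
    import importlib.util

    if importlib.util.find_spec("xformers") is not None:
        # cuts attention VRAM use; made the demo fit on a 12 GB card
        pipe.enable_xformers_memory_efficient_attention()
        return "xformers"
    # without xformers, recent diffusers falls back to torch SDP attention
    return "sdp"
```

This keeps the xformers dependency optional: the helper only calls the pipeline method when the package is actually importable.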

> how did you fix?

Without looking at your code, I can't tell you how to fix it. Just a note: I use Linux running on WSL2, maybe that matters....

Sequential CPU offload keeps the model out of VRAM, so for every generated image the weights have to be copied back to the GPU piece by piece. I don't know if this is a good...

> awww behind a paywall, :-(

Try this: https://github.com/cubiq/ComfyUI_InstantID - IMHO the best implementation so far. It runs on a 12 GB VRAM card without any problem and does not use...

> > Rarely, it can run on an RTX 4070 with 12 GB.
>
> How? I have a 12 GB 4070 Ti and it does not run. Even with everything closed,...

> Could you share some of the optimizations you run?

https://huggingface.co/docs/diffusers/main/optimization/memory#memory-efficient-attention And FaceAnalysis must be run on the CPU: `providers=['CPUExecutionProvider']`
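The CPU-provider setting above can be sketched like this. The model pack name `antelopev2` (the one InstantID ships with) and the `det_size` value are assumptions, not stated in the comment.

```python
CPU_PROVIDERS = ["CPUExecutionProvider"]  # ONNX Runtime CPU backend only

def build_face_analyzer(name="antelopev2", det_size=(640, 640)):
    """Create an insightface FaceAnalysis app pinned to the CPU.

    Keeping face detection off the GPU leaves all VRAM for the SDXL
    pipeline. The import is deferred so this sketch stays self-contained.
    """
    from insightface.app import FaceAnalysis

    app = FaceAnalysis(name=name, providers=CPU_PROVIDERS)
    app.prepare(ctx_id=0, det_size=det_size)  # ctx_id is moot on CPU
    return app
```

The key detail is the `providers` list: passing only `CPUExecutionProvider` prevents ONNX Runtime from ever allocating VRAM for the detector.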

I created a fork with my mods. It is tested only on Linux (WSL2) and an RTX 4070. https://github.com/b4sh/InstantID/tree/main You need to install everything from scratch: clone my repo, make a venv, and install requirements...

@camoody1 every run of mine takes about 60 seconds, with no OOM errors. And yes, this SDXL pipeline (SDXL checkpoint + IP_Adapter + ControlNet) needs more than 12 GB of VRAM, so part of the...

Works for me. It's slow and VRAM-hungry (RTX 4070), and some SDXL base checkpoints don't work (out of memory). Setting the Insightface loader to CUDA results in very slow generation. ComfyUI running on...

@yjktech Try a different SDXL checkpoint. sdxl-base-1.0 works for me (RTX 4070, 12 GB).