kanttouchthis
the repo was renamed from IF-I-IF to IF-I-XL. changing

```python
pipe = IFImg2ImgPipeline.from_pretrained(
    "DeepFloyd/IF-I-IF-v1.0",
    variant="fp16",
    torch_dtype=torch.float16,
)
```

to

```python
pipe = IFImg2ImgPipeline.from_pretrained(
    "DeepFloyd/IF-I-XL-v1.0",
    variant="fp16",
    torch_dtype=torch.float16,
)
```

fixes this.
by changing `device = "cuda:0"` to `device = "cpu"` you can run the code on the CPU without using any VRAM, though it will be extremely slow. I recommend using...
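a minimal sketch of that device switch in plain PyTorch, with a fallback so the same code runs either way (the `Linear` module here is just a stand-in; in the real code you would move the pipeline itself, e.g. `pipe.to(device)`):

```python
import torch

# Fall back to CPU when no GPU is present; CPU uses no VRAM but is much slower.
device = "cuda:0" if torch.cuda.is_available() else "cpu"

# Stand-in module for the pipeline; the real code would call pipe.to(device).
model = torch.nn.Linear(4, 4).to(device)
x = torch.randn(1, 4, device=device)
out = model(x)
print(out.shape)  # torch.Size([1, 4])
```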
your best options are torch.compile, which improves inference time at the cost of compile time (PyTorch >= 2.0.0, Linux only; though i haven't noticed significant improvements), or reducing the number...
it looks like it's trying to import tensorflow, which shouldn't be necessary for this repo. maybe you have an old version of tf installed? try uninstalling that
did you install the CUDA toolkit and cuDNN? i believe those are needed, but i'm not entirely sure
> Give them time to fix mistakes. The release just happened; there is a lot to get right.

it's not a mistake. while the code is open source, the actual...
> > There are 3, one per stage. There is an embedded comment that says to remove this line
>
> Hm. I grepped for it, saw only one. Do...
i had the same issue, my solution was to switch to pytorch 2.0.0+cu118 and disable the xformers memory efficient attention
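the reason dropping xformers works on torch 2.x: PyTorch 2.0 ships its own memory-efficient attention kernel as `torch.nn.functional.scaled_dot_product_attention`, which (as i understand it) diffusers picks up by default on torch >= 2.0. a quick sanity check that it matches naive attention, pure PyTorch, no xformers needed:

```python
import math
import torch
import torch.nn.functional as F

# Random query/key/value tensors: (batch, heads, seq_len, head_dim).
q = torch.randn(1, 4, 16, 32)
k = torch.randn(1, 4, 16, 32)
v = torch.randn(1, 4, 16, 32)

# Fused, memory-efficient attention built into torch >= 2.0.
fused = F.scaled_dot_product_attention(q, k, v)

# Naive reference implementation: softmax(QK^T / sqrt(d)) V.
scores = q @ k.transpose(-2, -1) / math.sqrt(q.shape[-1])
naive = scores.softmax(dim=-1) @ v

print(torch.allclose(fused, naive, atol=1e-5))  # True
```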
> Same here. I want to use xformers because I want to run deepfloyd on anything less than torch v2. If I don't use it I get an OOM error....
i'm running Pillow==9.3.0 and i don't have this issue