Andrew Ryzhkov
Yes, I did change the tokenizer to the open_clip one, namely ViT-H-14, but I didn't notice a difference and reverted it back. It could be just my case, so you'd...
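For context, a minimal sketch of the tokenizer swap being described, assuming the open_clip package is installed; the prompt is just an example:

```python
# Hypothetical sketch: encoding a prompt with the open_clip ViT-H-14 tokenizer
# instead of the pipeline's default CLIP tokenizer.
import open_clip

tokenizer = open_clip.get_tokenizer("ViT-H-14")
tokens = tokenizer(["a watercolor painting of a lighthouse"])  # LongTensor, shape (1, 77)
print(tokens.shape)
```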
@arisha07 I didn't use the notebook to generate any images, sorry. No idea why it doesn't work. The code looks fine to me. Yes, I did change the parameter for...
@arisha07 No, I didn't, but I guess it shouldn't be a problem.
The latest diffusers version has Euler Ancestral. It's quite easy to incorporate; it just needs some conversion code. It works even faster than DDIM and gives better results most...
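For reference, here is a minimal sketch of the scheduler swap with the stock diffusers pipeline (the model id and step count are arbitrary examples; the conversion code for an ONNX/OpenVINO pipeline would be separate):

```python
# Swap the pipeline's default scheduler for Euler Ancestral.
from diffusers import StableDiffusionPipeline, EulerAncestralDiscreteScheduler

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
# Reuse the existing scheduler config so the noise schedule parameters match.
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)

# Euler Ancestral typically needs fewer steps than DDIM for comparable quality.
image = pipe("a photo of a red fox in the snow", num_inference_steps=20).images[0]
image.save("fox.png")
```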
Actually, it's possible to change the resolution. But you would have to convert the model to another one. Or if you need flexible resolution, you can add dynamic axes during...
@ClashSAN No, sorry, I meant the model converted with dynamic axes is slower. `torch.onnx.export(dynamic_axes={"init_image": {0: "batch", 1: "channels", 2: "height", 3: "width"}})` BTW, ONNX is also slower than OpenVINO IR.
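To illustrate the trade-off, a minimal sketch of such an export with dynamic spatial axes; a tiny stand-in module is used instead of the real encoder, and the file name, tensor names, and opset are illustrative only:

```python
import torch
import torch.nn as nn

# Stand-in module; in practice this would be the VAE encoder / UNet wrapper.
model = nn.Conv2d(3, 4, kernel_size=3, padding=1).eval()
dummy_image = torch.randn(1, 3, 512, 512)  # tracing input; the resolution is not baked in

torch.onnx.export(
    model,
    (dummy_image,),
    "encoder.onnx",
    input_names=["init_image"],
    output_names=["latents"],
    # Dynamic batch/height/width: one ONNX file serves any resolution,
    # but the runtime loses some shape-specialized optimizations (hence slower).
    dynamic_axes={
        "init_image": {0: "batch", 1: "channels", 2: "height", 3: "width"},
        "latents": {0: "batch", 2: "height", 3: "width"},
    },
    opset_version=16,
)
```

Dropping the `dynamic_axes` argument bakes the tracing input's resolution into the exported graph, which is the faster, fixed-shape variant.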
Locally it works OK; Colab should too, but I didn't try it.
Very good! Yes, memory consumption depends heavily on the resolution. But even with 128 GB of RAM, I couldn't convert a model larger than 1024x768. I don't have a formula,...
@ClashSAN Ok, here's my version of the converter: https://github.com/RedAndr/SD_PyTorch2ONNX It has dynamic axes, so any resolution can be used. To get a constant resolution, just comment out the lines with dynamic_axes. Let...
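A quick sketch of what "any resolution" means at inference time, assuming an export like the one above ("encoder.onnx" and the input name "init_image" are assumptions, not names from the repo):

```python
# Run a dynamic-axes ONNX export at several resolutions with onnxruntime.
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("encoder.onnx", providers=["CPUExecutionProvider"])

for h, w in [(512, 512), (640, 448), (768, 512)]:
    image = np.random.randn(1, 3, h, w).astype(np.float32)
    (latents,) = sess.run(None, {"init_image": image})
    print(h, w, "->", latents.shape)
```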
Only the most recent Intel CPUs support bfloat16.
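One rough way to check on Linux is to look for the relevant CPU feature flags; this is a hypothetical sketch, not an exhaustive capability test:

```python
# Look for the flags that indicate native bfloat16 support on recent Intel CPUs:
# avx512_bf16 (Cooper Lake and later) and amx_bf16 (Sapphire Rapids and later).
def cpu_bf16_flags(path="/proc/cpuinfo"):
    with open(path) as f:
        info = f.read()
    return {flag for flag in ("avx512_bf16", "amx_bf16") if flag in info}

flags = cpu_bf16_flags()
print("bfloat16 CPU flags found:", sorted(flags) if flags else "none")
```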