Auto-disable xformers if using Torch 2 in readme example
It's really strange to ask users (who may not be well versed in coding) to perform a code-editing step when it could be accomplished with a simple check.
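A minimal sketch of the kind of check being suggested (the function name and parsing are assumptions for illustration; the underlying fact is that PyTorch 2.0 ships built-in memory-efficient attention via `torch.nn.functional.scaled_dot_product_attention`, making xformers redundant there):

```python
def needs_xformers(torch_version: str) -> bool:
    """Return True when the given torch version predates 2.0,
    i.e. when xformers' memory-efficient attention is still useful.

    In a real snippet one would pass torch.__version__ here.
    """
    # Strip local version suffixes like "1.13.1+cu117" before parsing.
    major = int(torch_version.split("+")[0].split(".")[0])
    return major < 2

print(needs_xformers("1.13.1+cu117"))  # True
print(needs_xformers("2.0.0+cu118"))   # False
```

The readme snippet could then call `pipe.enable_xformers_memory_efficient_attention()` only when this returns True, instead of asking users to comment the line out by hand.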
Hmm, I guess this gets a bit philosophical, but we also want to explain and show users how things work and how they can best make use of existing tools, instead of doing everything auto-magically.
It's a bit related to the PyTorch philosophy of "Simple over Easy": https://pytorch.org/docs/stable/community/design.html#principle-2-simple-over-easy
To better understand why it's not needed for PyTorch 2.0, have a look at: https://pytorch.org/blog/accelerated-diffusers-pt-20/
I believe advanced users will read the code comment anyway, but it creates an additional barrier for people who just want to run the test snippet and be satisfied with it. In contrast to your comment, the readme gives no explanation of why it should not be used. This only further mystifies the AI tech, imho.
Imagine if this were the case for larger projects such as Stable Diffusion or text2video models, where users would be required to modify each attention block for the thing to work on their PCs.
Some users may not even know what their torch version is. To check it they will have to either do some command-line kung-fu or create a new Python file containing
```python
import torch
print(torch.__version__)
```
and then execute it from the command line anyway. They will also need to look up solutions on the internet. It's simply unneeded frustration for ordinary folks, who will either go through the steps above or wait for a GUI tool. These things will spoil their first exposure to the product.
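For reference, the command-line alternative being alluded to is a one-liner (assuming `python` is on the PATH and torch is installed), though the point stands that many users won't know it:

```shell
python -c "import torch; print(torch.__version__)"
```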
I'm with @kabachuha; it makes no sense to keep the odd comment about needing to edit attention blocks.
Ok for me to change, up to the maintainers of this repo :-)