Anatoly Belikov
### Describe the bug
LoRA training doesn't work with mixed precision enabled:
File "/home/imgen/miniconda3/envs/py32/lib/python3.11/site-packages/xformers/ops/fmha/__init__.py", line 348, in _memory_efficient_attention_forward_requires_grad
    inp.validate_inputs()
File "/home/imgen/miniconda3/envs/py32/lib/python3.11/site-packages/xformers/ops/fmha/common.py", line 121, in validate_inputs
    raise ValueError(
ValueError: Query/Key/Value...
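A minimal repro sketch of the failing check, assuming a CUDA device and illustrative shapes; the mismatched dtypes mimic what mixed-precision/LoRA layers can produce before the attention call:

```python
import torch
import xformers.ops as xops

# xformers' validate_inputs requires query/key/value to share one dtype.
q = torch.randn(1, 16, 8, 64, device="cuda", dtype=torch.float16)
k = torch.randn(1, 16, 8, 64, device="cuda", dtype=torch.float32)  # mismatched dtype
v = torch.randn(1, 16, 8, 64, device="cuda", dtype=torch.float32)

try:
    xops.memory_efficient_attention(q, k, v)
except ValueError as e:
    print(e)  # complains that Query/Key/Value do not all have the same dtype

# One possible workaround: cast everything to a single dtype before the call.
k, v = k.to(q.dtype), v.to(q.dtype)
out = xops.memory_efficient_attention(q, k, v)
```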
Implement sketch inpaint from a1111 for non-inpaint models. This PR implements the same algorithm as https://github.com/huggingface/diffusers/pull/4824, but this pipeline has more parameters.
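A hedged sketch of the core per-step compositing used when inpainting with a non-inpaint model (the same idea the referenced PR follows); the tensor names, mask convention, and `scheduler` are assumptions standing in for the surrounding pipeline code:

```python
import torch


def blend_latents(latents: torch.Tensor,
                  init_latents: torch.Tensor,
                  mask: torch.Tensor,
                  noise: torch.Tensor,
                  scheduler,
                  t: torch.Tensor) -> torch.Tensor:
    """Keep unmasked regions pinned to the (re-noised) original image."""
    # Re-noise the original latents to the current timestep so they match the
    # noise level of the partially denoised `latents`.
    init_latents_t = scheduler.add_noise(init_latents, noise, t)
    # Convention here: mask == 1 where the model is free to repaint,
    # mask == 0 where the original image must be kept.
    return mask * latents + (1.0 - mask) * init_latents_t
```

Called once per denoising step, this keeps the unmasked area anchored to the source image while the masked area is denoised freely.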
The long-prompt weighting pipeline can't be used together with other pipelines, e.g. StableDiffusionKDiffusionPipeline. This PR moves the long-prompt weighting code to utils so that long-prompt weighting can be used with any pipeline: ...
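A hedged usage sketch of what this enables: the helper name and its import path are assumptions for illustration, not the exact API added by the PR, but the pattern is to build weighted embeddings outside the pipeline and pass them via `prompt_embeds`:

```python
import torch
from diffusers import StableDiffusionKDiffusionPipeline
# hypothetical location of the helper after it is moved to utils
from lpw_utils import get_weighted_text_embeddings

pipe = StableDiffusionKDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.set_scheduler("sample_euler")

# Compute weighted embeddings once, then hand them to any pipeline that
# accepts `prompt_embeds` / `negative_prompt_embeds`.
prompt_embeds, negative_embeds = get_weighted_text_embeddings(
    pipe, prompt="a (red:1.3) apple on a wooden table", uncond_prompt="blurry, low quality"
)
image = pipe(prompt_embeds=prompt_embeds,
             negative_prompt_embeds=negative_embeds).images[0]
```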
Sometimes it is useful to have a graph or tree of the inference, like source -> rule -> rule -> rule -> result (see the sketch below). Use cases:
- theorem proving
- planning
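A minimal sketch of one possible design (an assumption, not an existing API): each node records the rule that produced it and links back to its premises, so the derivation can be reconstructed from the result:

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class InferenceNode:
    value: str                               # a fact, subgoal, or state
    rule: Optional[str] = None               # rule that derived this node (None for sources)
    premises: List["InferenceNode"] = field(default_factory=list)

    def trace(self, depth: int = 0) -> str:
        """Pretty-print the derivation: result at the root, sources at the leaves."""
        label = f"{self.value}  [via {self.rule}]" if self.rule else self.value
        return "\n".join(["  " * depth + label] +
                         [p.trace(depth + 1) for p in self.premises])


# source -> rule -> result, as in the description above
source = InferenceNode("Socrates is a man")
axiom = InferenceNode("All men are mortal")
result = InferenceNode("Socrates is mortal", rule="modus ponens", premises=[source, axiom])
print(result.trace())
```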
Just add some .to(device) calls.
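A hedged sketch of the kind of change described: the module here is illustrative, the point is that the model and its inputs end up on the same device via explicit `.to(device)` calls:

```python
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = torch.nn.Linear(4, 2).to(device)   # move the module to the target device
x = torch.randn(1, 4).to(device)           # move the input to the same device
y = model(x)
```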
The KAN implementation overrides https://pytorch.org/docs/stable/generated/torch.nn.Module.html#torch.nn.Module.train. If I embed KAN in a PyTorch module and call train() on it, it raises an error, because KAN's train expects a dict.
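A toy illustration of the name clash (a stand-in class, not the actual KAN code): `nn.Module.train(mode)` recurses into child modules, so a child whose `train` expects a dataset dict gets handed a bool instead:

```python
import torch.nn as nn


class KANLike(nn.Module):
    """Stand-in for KAN; only the conflicting signature matters."""
    def train(self, dataset: dict):      # shadows nn.Module.train(mode: bool = True)
        return dataset["train_input"]    # fails when handed a bool


class Wrapper(nn.Module):
    def __init__(self):
        super().__init__()
        self.kan = KANLike()


try:
    Wrapper().train()   # nn.Module.train recurses and calls self.kan.train(True)
except TypeError as e:
    print(e)            # 'bool' object is not subscriptable
```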
### Before submitting your bug report
- [X] I believe this is a bug. I'll try to join the [Continue Discord](https://discord.gg/NWtdYexhMs) for questions
- [X] I'm not able to find...