NeedsMoar

138 comments of NeedsMoar

> Hi, Thanks for opening this post! At this point, windows support is best effort (it's not something we need internally). If you can make things compatible with windows, we...

```
xFormers 0.0.25
memory_efficient_attention.ckF:              unavailable
memory_efficient_attention.ckB:              unavailable
memory_efficient_attention.ck_decoderF:      unavailable
memory_efficient_attention.ck_splitKF:       unavailable
memory_efficient_attention.cutlassF:         available
memory_efficient_attention.cutlassB:         available
memory_efficient_attention.decoderF:         available
memory_efficient_attention.flshattF@…:       available
memory_efficient_attention.flshattB@…:       available
memory_efficient_attention.smallkF:          available
memory_efficient_attention.smallkB:          available
memory_efficient_attention.triton_splitKF:   available
indexing.scaled_index_addF:                  available
indexing.scaled_index_addB:                  ...
```
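(For context, a listing like this is what `python -m xformers.info` prints.) A minimal sketch of exercising whichever fused-attention backend is available; the shapes, dtype, and device here are my own assumptions, not from the original report:

```python
import torch
import xformers.ops as xops

# [batch, seq_len, heads, head_dim] layout; fp16 on CUDA makes the
# cutlass/flash backends from the listing above eligible for dispatch.
q = torch.randn(1, 1024, 8, 64, device="cuda", dtype=torch.float16)
k = torch.randn_like(q)
v = torch.randn_like(q)

# xFormers picks the best backend that reports "available" for these inputs.
out = xops.memory_efficient_attention(q, k, v)
print(out.shape)  # torch.Size([1, 1024, 8, 64])
```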

> all machines have GPU non-intel I'm not sure what this means, but do you have the manufacturer drivers from either AMD or NVidia installed, too? The default Windows drivers...

```python
if not args.normalvram and not args.cpu:
    if lowvram_available and total_vram < (current_free_mem - inference_memory): # only switch to lowvram if really necessary
        vram_set_state = VRAMState.LOW_VRAM
```

First off, if you're ripping these yourself like a good citizen (downloading movies is bad, you should rob the local big box store blind instead :D), save yourself a huge...

You need either CUDA 11.8 or any of the 12.x series (I just use 12.4 to get the newest compiler bugfixes; it's compatible with the 12.1 that torch is built with)...
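If you want to confirm the pairing before touching anything, here's a quick sanity check (just a sketch; the version strings in the comments are illustrative examples):

```python
import torch

print(torch.__version__)          # e.g. "2.1.2+cu121" -> wheel built for CUDA 12.1
print(torch.version.cuda)         # toolkit version the wheel was compiled against
print(torch.cuda.is_available())  # False here often means a driver/runtime mismatch
```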

It's pretty obvious: you have PyTorch 2.1.0 installed and a version of xformers built for 2.1.2. Run `pip uninstall torch torchaudio torchvision xformers` ... then follow the instructions on the...
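A sketch of how to spot this kind of mismatch yourself (the versions in the output comments are examples, not from the original thread):

```python
import torch
import xformers

# xformers wheels are compiled against one exact torch release; if these
# disagree, xformers can't load its C++/CUDA extensions.
print(torch.__version__)     # e.g. "2.1.0+cu121" (what's installed)
print(xformers.__version__)  # e.g. "0.0.23.post1" (built against torch 2.1.2)
```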

Edit: Never mind, I see you linked to a bug you filed over there. Comfy probably can't do much about this. Just FYI, I'm running a CUDA card and that extension...

Try installing the flash-attention 2.3.6 py311 ada/sm_89 wheel (not the xformers one) from my link on the discussions page you posted on (yeah, it's from last December; it doesn't seem to matter)...
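To verify the wheel matches your setup after installing (a sketch; `flash_attn` is the package's import name, and (8, 9) is the sm_89/Ada compute capability the wheel targets):

```python
import torch
import flash_attn

print(flash_attn.__version__)               # expect "2.3.6" for the wheel above
print(torch.cuda.get_device_capability(0))  # (8, 9) on an Ada (sm_89) card
```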

Yeah, that was forcing it to use PyTorch attention, which was using the installed flash-attention-2 on my system via Torch's lazy-as-hell loading mechanisms, where flash isn't even checked for when...
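A sketch for pinning down which kernel torch's attention path actually dispatches to (torch 2.1-era API; later releases moved this to `torch.nn.attention.sdpa_kernel`):

```python
import torch
import torch.nn.functional as F

# [batch, heads, seq_len, head_dim] layout for torch's SDPA.
q = torch.randn(1, 8, 1024, 64, device="cuda", dtype=torch.float16)

# What the SDPA dispatcher is willing to use on this build:
print(torch.backends.cuda.flash_sdp_enabled())
print(torch.backends.cuda.mem_efficient_sdp_enabled())

# Force the flash path only; this raises at call time if flash can't run,
# which helps confirm whether the flash kernel is actually being taken.
with torch.backends.cuda.sdp_kernel(enable_flash=True, enable_math=False,
                                    enable_mem_efficient=False):
    out = F.scaled_dot_product_attention(q, q, q)
```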