fractal-fumbler

36 comments by fractal-fumbler

Also on ROCm. Should it work with ROCm? With the new option `--upcast-attn` I am getting an error (full command: `--opt-split-attention --opt-channelslast --medvram --precision upcast --upcast-attn --opt-sub-quad-attention`) Traceback ```python Traceback (most recent...

Tried after your latest commit :) 1. With only `--precision upcast` it works, but the SD-2.1 model gives black images. 2. With `--precision upcast --upcast-attn` it throws an error...
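For context on the black-image symptom, here is a hedged sketch (a NumPy illustration, not the webui's actual code) of the usual mechanism: attention scores that overflow float16's maximum (65504) become inf, softmax over inf produces NaN, and NaN latents decode to a black picture. Computing the same scores in float32, which is what upcast attention does, keeps them finite.

```python
import numpy as np

# Illustration only (assumed toy values, not the webui implementation):
# fp16 attention scores overflow to inf, and softmax over inf yields NaN.
q = np.full(8, 100.0)
k = np.full(8, 100.0)

# fp16 path: each product is 10000, but the sum 80000 > 65504 -> inf
s16 = (q.astype(np.float16) * k.astype(np.float16)).sum(dtype=np.float16)
# fp32 path ("upcast attention"): the same math stays finite at 80000.0
s32 = (q.astype(np.float32) * k.astype(np.float32)).sum(dtype=np.float32)

def softmax(x):
    x = x - x.max()        # inf - inf = NaN once scores have overflowed
    e = np.exp(x)
    return e / e.sum()

attn16 = softmax(np.array([s16, 0.0], dtype=np.float16))  # contains NaN
attn32 = softmax(np.array([s32, 0.0], dtype=np.float32))  # stays finite
print(s16, attn16)
print(s32, attn32)
```

The upcast has to happen before the scores are computed; once a value is already inf in fp16, casting it to fp32 cannot recover it.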

Built from source. > @fractal-fumbler I know you mentioned using ROCm and PyTorch 1.13.1, but did you install with the wheels package or was it built from source? It seems...

> @fractal-fumbler Please continue to post tracebacks when you have them. Actually the tracebacks are the only way I can have much of any idea what is going on, so...

Neat! I was able to generate a picture with `txt2img` using this command. Command ```shell PYTORCH_CUDA_ALLOC_CONF=garbage_collection_threshold:0.6,roundup_power2_divisions:4,max_split_size_mb:128 PYTHONPATH=/tmp/stable-diffusion-webui python launch.py --theme=dark --opt-split-attention --opt-channelslast --always-batch-cond-uncond --medvram --opt-sub-quad-attention --precision upcast --upcast-attn ``` VRAM consumption...

Testing further, no-half vs upcast :) Example with `--no-half --precision full --no-half-vae` (no-half): ![photo_2023-01-13_01-55-21](https://user-images.githubusercontent.com/80472908/212198843-e9deecf1-1b19-434a-b605-49ce57b7d8cf.jpg) Example with `--precision upcast --upcast-attn` (upcast): ![photo_2023-01-13_01-55-25](https://user-images.githubusercontent.com/80472908/212198992-2a9677bd-a552-4d73-a673-4a5bf5d1a750.jpg)
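A back-of-the-envelope comparison of why the upcast route is attractive on limited VRAM: `--no-half --precision full` holds every weight in fp32, roughly doubling weight memory versus keeping fp16 weights and upcasting only the numerically sensitive ops. The ~865M parameter count below is an assumed approximation for the SD 2.1 UNet, and activations and the VAE are ignored.

```python
# Rough weight-memory arithmetic (illustration only; PARAMS is an
# assumed approximate SD 2.1 UNet parameter count, activations ignored).
PARAMS = 865_000_000
fp16_gib = PARAMS * 2 / 1024**3   # 2 bytes per fp16 weight
fp32_gib = PARAMS * 4 / 1024**3   # 4 bytes per fp32 weight
print(f"fp16 weights: {fp16_gib:.2f} GiB, fp32 weights: {fp32_gib:.2f} GiB")
```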

> Please try using `--upcast-sampling` without `--precision` @brkirch, with the latest patch (96093475731c2f95fba3911ee66e5065deb21005): 1. `--opt-split-attention --opt-channelslast --always-batch-cond-uncond --medvram --opt-sub-quad-attention --upcast-sampling --upcast-attn` gives black images 2. `--opt-split-attention --opt-channelslast --always-batch-cond-uncond --medvram --opt-sub-quad-attention --precision...

Hadn't checked before, but FYI: 96093475731c2f95fba3911ee66e5065deb21005 and 280083c9c30058d092c6b6f6aadac5e669b322fc didn't throw an error with an embedding in use, though they still give black images.
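For batch-testing commits like the two above, a hypothetical helper can flag the black-image symptom programmatically (`looks_black` is only a sketch, not part of the webui; NaN latents typically decode to an all-zero RGB array):

```python
import numpy as np

def looks_black(img: np.ndarray, tol: float = 1e-3) -> bool:
    """Return True if a decoded image is (near-)uniformly black."""
    # NaNs are ignored so a zeros-plus-NaNs array still gets flagged.
    return img.size == 0 or bool(np.nanmax(np.abs(img)) < tol)

normal = np.full((8, 8, 3), 0.5)   # mid-gray stand-in for a real render
black = np.zeros((8, 8, 3))        # the failure mode described above
print(looks_black(normal), looks_black(black))
```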

> I can't find any usage of ATTN_PRECISION in code with the commit hash mentioned above. Their latest commit does have some code related to it though (c12d960d1ee4f9134c2516862ef991ec52d3f59e) You meant...