Chris Malone

Results: 35 comments of Chris Malone

I got brown outputs with both the 512 and 768 ckpts. I tried running the 768 diffusers version with the official SD 2.0 diffusers pipeline and still got the same result.
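
Roughly what running the 768 model through the official SD 2.0 diffusers pipeline looks like, as a minimal sketch (model id and scheduler follow the SD 2.0 model card; the prompt is just a placeholder):

```python
import torch
from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler

# Stock SD 2.0 768-v checkpoint from the official release
model_id = "stabilityai/stable-diffusion-2"
scheduler = EulerDiscreteScheduler.from_pretrained(model_id, subfolder="scheduler")
pipe = StableDiffusionPipeline.from_pretrained(
    model_id, scheduler=scheduler, torch_dtype=torch.float16
).to("cuda")

# Placeholder prompt; at 768x768 this was still giving me brown outputs
image = pipe("a photo of an astronaut riding a horse", height=768, width=768).images[0]
image.save("out.png")
```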

> fixed now ![162](https://user-images.githubusercontent.com/6998259/204462176-4a919039-7200-4a0d-b239-8b05c1c64c30.jpg)

Getting outputs like this with the latest version at 768 resolution. Previously I was getting brown outputs at 512 and 768 whether I used the diffusers model...

I rewrote the notebook to run locally since I don't have much room on Google Drive, so it's a completely new install.

Actually, it's working now in diffusers. I forgot I had made it replace the A1111 repo with diffusers because it was hitting OOM, and diffusers wasn't updated. Tried the original using the A1111 colab...

I assumed we would just be running the smaller models on our own GPUs without distributed training. Any chance an RTX 4080 can run 13B if we trade off VRAM...
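
For what it's worth, here is a rough sketch of the kind of VRAM trade-off I mean, assuming a Hugging Face transformers checkpoint with bitsandbytes 8-bit weights and CPU offload (the model id is a placeholder, not any specific release):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Placeholder id: any ~13B causal LM hosted on the Hugging Face hub
model_id = "some-org/some-13b-model"

# 8-bit weights cut the footprint roughly in half vs fp16 (~13 GB for 13B params);
# device_map="auto" spills whatever still doesn't fit into CPU RAM, trading speed for VRAM
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```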

> > @Bylaew @shaowin16 I updated requirements.txt to comment out most of the possibly non-universal, conflict-causing dependencies. Please try again after pull.
>
> Installed /Users/shaowin/.pyenv/versions/3.10.0/lib/python3.10/site-packages/metagpt-0.1-py3.10.egg
> Processing dependencies for metagpt==0.1...

Fixed by installing pandas (had to install 1.4.1)

Supposedly NF4 gives up to a 4x speedup and uses less VRAM while keeping even higher precision, and it's now the recommended format. Seems like this might actually be doable now? https://civitai.com/models/638572/nf4-flux1 https://github.com/lllyasviel/stable-diffusion-webui-forge/discussions/981
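
For reference, NF4 here is the 4-bit NormalFloat data type from bitsandbytes (the QLoRA format) rather than anything Flux-specific; in the Hugging Face stack the same quantization is expressed roughly like this (a general sketch, not the Forge loading path for the linked checkpoint):

```python
import torch
from transformers import BitsAndBytesConfig

# NF4 = 4-bit NormalFloat weights from bitsandbytes; compute still runs in bf16/fp16
nf4_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
# Pass quantization_config=nf4_config to from_pretrained() when loading a model.
```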