Meatfucker
Curious if it is possible to use textual inversion embeddings from stable-diffusion, and if so how I would go about doing so.
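Conceptually, a textual inversion embedding just supplies a learned vector for a new placeholder token, which the text encoder then treats like any other token; in recent diffusers releases this is handled by `pipe.load_textual_inversion(...)`, which accepts a local `learned_embeds.bin`/`.safetensors` file or a Hub id. A toy sketch of the underlying idea (invented token table and tiny dimensions, no actual model):

```python
# Toy illustration of what a textual inversion embedding does:
# it registers a learned vector under a new placeholder token,
# which prompt encoding then looks up like any other token.
# (Tiny dimensions here; a real SD embedding is e.g. 768-dim.)

def build_embedder(vocab, dim):
    """Return a token->vector table with simple deterministic values."""
    return {tok: [float(i)] * dim for i, tok in enumerate(vocab)}

def load_textual_inversion(table, placeholder, learned_vector):
    """Register a learned vector under a new placeholder token."""
    if placeholder in table:
        raise ValueError(f"token {placeholder!r} already exists")
    table[placeholder] = learned_vector

def embed_prompt(table, prompt):
    """Look up each whitespace-split token of the prompt."""
    return [table[tok] for tok in prompt.split()]

table = build_embedder(["a", "photo", "of"], dim=4)
load_textual_inversion(table, "<my-style>", [0.5, -0.5, 0.25, 0.0])
vectors = embed_prompt(table, "a photo of <my-style>")
print(len(vectors), vectors[-1])
```

With a real pipeline the equivalent is roughly `pipe.load_textual_inversion("sd-concepts-library/some-concept")` followed by using the concept's placeholder token in the prompt.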
I have attempted to get this project running on an Ubuntu machine with an M40, as well as a Windows machine with a 3090, and on both when I try...
I managed to get this running on Windows. The two primary issues were that the checkpoint script is for bash, which can be solved by either installing bash or downloading the...
Changed environment.yaml to use a compatible transformers version. Made a batch script to download the checkpoint on Windows. Note this will only work on Windows 10 or higher.
Very basic batch scripts for 30b and 66b on machines with 32GB of system RAM and 24GB of VRAM. In order to get 66b to run you'll need to increase...
This includes changes to fix the precision issues in the training scripts, as well as a modification to the conda creation command so it pulls in a complete requirement list,...
This alters the DialogGen loading to use bitsandbytes 4-bit quantization. This reduces overall memory usage and makes inference possible on 24GB consumer GPUs with DialogGen enabled.
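bitsandbytes achieves this by storing weights blockwise: each block keeps one scale plus a 4-bit index per value, roughly quartering memory versus fp16. A hand-rolled sketch of that idea (plain Python, uniform quantization levels rather than bitsandbytes' NF4 codebook, invented block size):

```python
# Toy blockwise 4-bit quantization: store each weight as a 4-bit
# index (0..15) plus one float scale per block. bitsandbytes' NF4
# uses a non-uniform codebook; uniform levels are used here purely
# to illustrate the memory/precision trade-off.

def quantize_block(block):
    """Quantize a block of floats to 4-bit indices plus a scale."""
    scale = max(abs(v) for v in block) or 1.0
    # Map [-scale, scale] onto the 16 integer levels 0..15.
    idxs = [round((v / scale + 1.0) * 7.5) for v in block]
    return scale, idxs

def dequantize_block(scale, idxs):
    """Reconstruct approximate floats from indices and scale."""
    return [(i / 7.5 - 1.0) * scale for i in idxs]

weights = [0.3, -1.2, 0.8, 0.05, -0.4, 1.2, -0.9, 0.0]
scale, idxs = quantize_block(weights)
restored = dequantize_block(scale, idxs)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
print(idxs)     # sixteen-level codes, 4 bits each
print(max_err)  # small per-weight reconstruction error
```

In practice the real change is a one-liner at model-load time, e.g. passing `quantization_config=BitsAndBytesConfig(load_in_4bit=True)` to `from_pretrained` in transformers-style loaders.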
### Describe the bug
When using the FluxPriorReduxPipeline, the prompt_embeds_scale and pooled_prompt_embeds_scale arguments seem to have no effect on the final generation.

### Reproduction
```
async def get_redux_embeds(image, prompt, strength):
    redux_repo...
```
### Describe the bug
If you try to use ZImagePipeline with batch sizes above 1, it fails with an assertion error.

### Reproduction
```
import torch
from diffusers import ZImagePipeline...
```
I'd like to see MagCache acceleration supported if possible. My own testing shows it cuts video generation times in half with almost no quality loss. It seems...