solitaryTian

Results: 5 comments of solitaryTian

Hello! I am also interested in this problem. Have you solved it?

> Hi, Thank you for your brilliant work and beautiful code. I found most of your tutorial scripts and codes to be quite self-explanatory, but I do not find code...

> [GGUF](https://huggingface.co/docs/hub/en/gguf) is becoming a preferred means of distribution of FLUX fine-tunes. > > Transformers recently added general support for GGUF and are slowly adding support for [additional model types](https://github.com/huggingface/transformers/issues/33260)....

Set dtype = torch.bfloat16 in this demo, then run it again. Locate the new error and cast q, k, v to the same dtype.
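
A minimal sketch of that fix, assuming the new error comes from a dtype mismatch inside the attention call (the function and variable names here are illustrative, not taken from the demo itself):

```python
import torch
import torch.nn.functional as F

dtype = torch.bfloat16  # set the demo's dtype to bfloat16 as described above

def align_qkv(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor,
              target_dtype: torch.dtype = torch.bfloat16):
    """Cast q, k, v to one common dtype so the attention kernel
    does not raise a dtype-mismatch error."""
    return q.to(target_dtype), k.to(target_dtype), v.to(target_dtype)

# At the line the new error points to, cast right before attention, e.g.:
# q, k, v = align_qkv(q, k, v, dtype)
# out = F.scaled_dot_product_attention(q, k, v)
```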

> "negative_prompt": "明亮的色调、曝光过度、静态、细节模糊、字幕、风格、作品、绘画、图像、静态、整体灰色、质量最差、质量低、JPEG 压缩残留、丑陋、不完整、多余的手指、画得不好的手、画得不好的脸、变形、毁容、肢体畸形、手指融合、静止图像、背景混乱、三条腿、背景中有很多人、向后走", "infer_steps": 17, " target_video_length": 81, "target_width": 832, "target_height": 480, "self_attn_1_type": "flash_attn2", "cross_attn_1_type": "flash_attn2", "cross_attn_2_type": "flash_attn2", "seed": 2050392903, "enable_cfg": true, "sample_guide_scale": 5, "sample_shift": 5, “cpu_offload”:false, “offload_granularity”:“block”, “offload_ratio”:1,...