Long(Tony) Lian

Results: 82 comments by Long(Tony) Lian

Please rename the config's .bak file to wechat.conf.

I guess you need to have 3 frames in the config for the multi-view checkpoint.

I also found that the downloaded log files omit some lines due to the `tqdm` effect mentioned above. However, these lines are present in the web view of the log tab, which...
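If this is the usual `tqdm` behavior, the bar rewrites a single line with carriage returns, so a plain-text capture of the stream can end up keeping only the final state of that line. A small illustration of the effect (not the project's code):

```python
import sys
import time

# Each write returns to the start of the line with "\r", so a file that records
# only the final state of the stream keeps just one progress line instead of all updates.
for i in range(5):
    sys.stderr.write(f"\rprogress: {i + 1}/5")
    sys.stderr.flush()
    time.sleep(0.1)
sys.stderr.write("\n")
```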

What is your setting? It works on my end.

```
To create a public link, set `share=True` in `launch()`.
100%|████████████████████████████████████████| 50/50 [00:30
```

It seems that the VAE decoder on the refiner is somehow not fp16. Did you change any config? You can also disable the refiner to see if it still happens.
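As a quick check, the dtype of the refiner's VAE can be inspected right after loading. A minimal sketch assuming the stock diffusers refiner pipeline (the model id and `variant` flag here are the usual defaults, not taken from your setup):

```python
import torch
from diffusers import StableDiffusionXLImg2ImgPipeline

# Load the refiner in half precision; `variant="fp16"` pulls the fp16 weight files.
refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
)

# If this prints torch.float32, the VAE decoder was loaded in full precision.
print(refiner.vae.dtype)
```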

Seems like it still works if the models are not offloaded.

```
$ OFFLOAD_BASE=false OFFLOAD_REFINER=false python app.py
Loading model stabilityai/stable-diffusion-xl-base-1.0
Loading pipeline components...: 100%|████████████████████████████| 7/7 [00:01
```

Since you loaded custom weights, it's possible that fp32 weights were somehow loaded. You probably want to check whether you have fp16 weights (https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/tree/main/vae). If you loaded fp32 weights, you...
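If the custom checkpoint only ships an fp32 VAE, one option is to load the VAE separately in fp16 and hand it to the pipeline. A rough sketch using the stock SDXL VAE (the exact model ids and paths in your setup may differ):

```python
import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

# Load only the VAE from the fp16 variant of the base checkpoint.
vae = AutoencoderKL.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    subfolder="vae",
    torch_dtype=torch.float16,
    variant="fp16",
)

# Pass the fp16 VAE explicitly so it is not replaced by an fp32 copy.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    vae=vae,
    torch_dtype=torch.float16,
    variant="fp16",
)
```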

Since I could not reproduce this, could you show me your diff?

I believe it will re-encode, so it is applied to the latents. The implementation shows that images are transformed to latents before processing: https://github.com/huggingface/diffusers/blob/af48bf200860d8b83fe3be92b2d7ae556a3b4111/src/diffusers/pipelines/stable_diffusion_xl/pipeline_stable_diffusion_xl_img2img.py#L841 I believe this is their recommended...
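Roughly, the step linked above amounts to encoding the input image back into latent space before denoising. A simplified sketch of that idea (not the pipeline's exact code; the model id and input path are placeholders):

```python
import torch
from diffusers import AutoencoderKL
from diffusers.image_processor import VaeImageProcessor
from PIL import Image

vae = AutoencoderKL.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", subfolder="vae"
)
image_processor = VaeImageProcessor()

# "input.png" stands in for the img2img input image.
image = image_processor.preprocess(Image.open("input.png").convert("RGB"))

with torch.no_grad():
    # Encode to latents and apply the VAE scaling factor, as the pipeline does internally.
    latents = vae.encode(image).latent_dist.sample()
    latents = latents * vae.config.scaling_factor
```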

> there is an output_type="latent" parameter

In this way, you are right: the step of converting the latents to an image can be skipped.

> How much VRAM is needed if...
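For reference, this is the usual base-plus-refiner pattern where the base output stays in latent space. A sketch assuming the standard SDXL checkpoints (your model ids and offloading setup may differ):

```python
import torch
from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline

base = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16"
).to("cuda")
refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16, variant="fp16"
).to("cuda")

prompt = "a photo of an astronaut riding a horse"

# Keep the base output as latents so no VAE decode happens between base and refiner.
latents = base(prompt=prompt, output_type="latent").images

# The refiner consumes the latents directly and decodes only the final image.
image = refiner(prompt=prompt, image=latents).images[0]
```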