black_cat
My environment doesn't have internet access, so I downloaded the model as a zip and changed the code in txt2img.py. This works for me: `cache_dir="/root/.cache/huggingface/hub/models--laion--CLIP-ViT-H-14-laion2B-s32B-b79K/snapshots/94a64189c3535c1cb44acfcccd7b0908c1c8eb23"` and `model, _, _ = open_clip.create_model_and_transforms(arch, device=torch.device('cpu'), cache_dir=cache_dir)`. Hope that is...
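For other offline users, a minimal sketch of that change, assuming the snapshot was unpacked to the path above and that `arch` resolves to the `ViT-H-14` architecture matching this checkpoint; the exact line being edited may differ in your copy of the code.

```python
import torch
import open_clip

# Local snapshot of laion/CLIP-ViT-H-14-laion2B-s32B-b79K, unpacked from a zip
# downloaded on a machine that does have internet access.
cache_dir = (
    "/root/.cache/huggingface/hub/"
    "models--laion--CLIP-ViT-H-14-laion2B-s32B-b79K/"
    "snapshots/94a64189c3535c1cb44acfcccd7b0908c1c8eb23"
)

# "ViT-H-14" is an assumption for `arch`; it is the architecture that matches
# this checkpoint. cache_dir points open_clip at the locally unpacked files
# instead of the network, mirroring the change the commenter reported working.
model, _, _ = open_clip.create_model_and_transforms(
    "ViT-H-14",
    device=torch.device("cpu"),
    cache_dir=cache_dir,
)
```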
Hi, have you found a way to finetune the x4-upscaling model?
I have the same issue: each task I have tested produces the same output as the input.
The official docs don't seem to support Qwen1.5 yet; the baseline only includes Qwen.
> I use the CLI `python main.py --train --base configs/stableSRNew/v2-finetune_text_T_512.yaml --gpus GPU_ID, --name NAME --scale_lr False` and I get killed. At first, I reduced the batch_size and...
@IceClear Thanks for your reply!! Maybe that is the problem, but if I can't increase my CPU memory, is there an alternative?
> Maybe you can check if the model ckpt is loaded twice since in my current setting, the diffusion model and vqgan are initialized from the same large ckpt and...
> > > Maybe you can check if the model ckpt is loaded twice since in my current setting, the diffusion model and vqgan are initialized from the same large...
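For anyone debugging the memory spike, a rough sketch of what "loaded twice" can look like, assuming the usual LDM-style checkpoint layout where the VQGAN (autoencoder) weights sit under the `first_stage_model.` prefix; the checkpoint name below is a placeholder.

```python
import torch

ckpt_path = "large_model.ckpt"  # placeholder; point this at your actual checkpoint

# Load the big checkpoint into CPU RAM once and reuse the tensors for both
# sub-models; calling torch.load on the same file twice roughly doubles the
# peak CPU memory during initialization.
state = torch.load(ckpt_path, map_location="cpu")["state_dict"]

# In LDM-style checkpoints the autoencoder weights are stored under the
# "first_stage_model." prefix; the rest belongs to the diffusion model.
vqgan_sd = {k[len("first_stage_model."):]: v
            for k, v in state.items() if k.startswith("first_stage_model.")}
diffusion_sd = {k: v for k, v in state.items()
                if not k.startswith("first_stage_model.")}

# Then initialize each module from the shared dictionaries, e.g.:
# diffusion_model.load_state_dict(diffusion_sd, strict=False)
# vqgan.load_state_dict(vqgan_sd, strict=False)
```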