yy-victory

3 comments of yy-victory

clip.load("ViT-L/14", device=device)在anaconda3/envs/pytorch/lib/python3.10/site-packages/clip-1.0-py3.10.egg/clip/clip.py的120行,可以更改其路径`model_path = _download(_MODELS[name], download_root or os.path.expanduser("yourpath_root/clip"))` CLIP-ViT-bigG-14-laion2B-39B-b160k/open_clip_pytorch_model.bin好像是在anaconda3/envs/pytorch/lib/python3.10/site-packages/open_clip/factory.py文件的create_model方法279行的if pretrained:判断语句下加下面的代码 `if pretrained == 'laion2b_s39b_b160k' and model_name == 'ViT-bigG-14': pretrained = 'yourpath/CLIP-ViT-bigG-14-laion2B-39B-b160k/open_clip_pytorch_model.bin'`

Thanks for answering my question. The unet_checkpoint_path was trained on webvid_2m_val, and the config I trained this checkpoint with is image_finetune.yaml; the reason the config here is training.yaml is that I...

> Hello, I ran into a similar problem. I used the validation split of WebVid-2M (about 5k videos) to fine-tune the motion LoRA. At 50 steps the results were similar to yours (the different frames change rapidly). At 2.5k steps the results stabilized, but there was no difference between frames. I think it is because we are using a small dataset, or I overlooked some configuration. I also hope this issue gets resolved.

Thank you for your reply. Increasing the number of training steps can indeed improve the results.