Wu Ruiqi
You can use [https://huggingface.co/docs/diffusers/using-diffusers/sdxl](https://huggingface.co/docs/diffusers/using-diffusers/sdxl) to generate the first frame. Our project uses a lower version of diffusers, while SD-XL is only integrated in a higher version; aligning the two...
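The linked guide boils down to a few lines — a minimal sketch assuming a recent diffusers release that ships `StableDiffusionXLPipeline` (the model ID and prompt are illustrative placeholders), so it will not run against the older diffusers version pinned by this project:

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Sketch: generate a single image to use as the first frame.
# Requires a GPU and a diffusers version with SD-XL support.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",  # public SD-XL base weights
    torch_dtype=torch.float16,
    variant="fp16",
)
pipe.to("cuda")

image = pipe(prompt="a horse running on the beach").images[0]
image.save("first_frame.png")  # feed this image in as the first frame
```

Generating the frame in a separate environment and passing only the saved PNG to the training/inference code sidesteps the version mismatch entirely.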
Learning multiple actions from limited data is difficult. In my experiments, learning even a single action can cause performance degradation if the given data is very different. Some continual learning...
Thanks for your interest! We will release the code for video editing in 3-5 days. Please stay tuned to our repo!
> Hi, thanks for sharing the great work! I would like to learn how to try video editing after training?

Hi! The code for video editing is released! Sorry for my...
> Can LAMP support LoRA? Each LoRA for a motion?

Applying LoRA to a T2V model might be more reasonable; our LAMP is based on a T2I model. In any case, you can give...
You can refer to the BasicSR README for training; the commands for training the model are all given there. You can train the network on top of the pretrained VQGAN. As for the pretraining code and the code for obtaining the final adjusted weights, I'll try to open-source them after my exams, although I've already put this off many times...
Sorry, I don't have much experience with model quantization and deployment, so you'll have to work that out yourself. If you do get it working, feel free to contact me and I'll also open-source the ONNX weights you produced.
MSBDN and Dehamer were trained on RESIDE, and D4 was trained with its own GAN pipeline; all were tested using the models released by their authors. For FADE we used the code from the link provided in the original paper, and the other metrics were computed with the code provided by IQA-PyTorch.
> Is your test code https://live.ece.utexas.edu/research/fog/index.html (FADE) and https://github.com/chaofengc/IQA-PyTorch (BRISQUE, NIMA)? Testing on the original 4,300+ hazy images in RTTS, I got results somewhat different from yours, especially for NIMA: the hazy images alone already score 4.5. I need fairly accurate numbers — thanks for your help.

Yes, I used the code above, so in principle there shouldn't be a problem. Could the channels have been reversed when reading the images with OpenCV? If you have time, add me on WeChat (13645548058) and I can take a look.
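The channel-order hypothesis is easy to check: OpenCV's `imread` returns images in BGR order, while learned metrics such as NIMA in IQA-PyTorch expect RGB, and feeding BGR can noticeably shift the scores. A minimal sketch of the conversion on a synthetic image (the metric call itself is omitted):

```python
import numpy as np

# A 2x2 "image" in BGR order, as cv2.imread would return it.
bgr = np.array(
    [[[255, 0, 0], [0, 255, 0]],     # pure blue, pure green (in BGR)
     [[0, 0, 255], [10, 20, 30]]],   # pure red, arbitrary pixel
    dtype=np.uint8,
)

# Reversing the last axis converts BGR -> RGB, equivalent to
# cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB).
rgb = bgr[..., ::-1]

assert rgb[0, 0].tolist() == [0, 0, 255]  # blue now sits in the B slot
assert rgb[1, 1].tolist() == [30, 20, 10]
```

If scores computed on `bgr` and on `rgb` differ markedly, the channel order is the culprit.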
> How can I obtain or pretrain CHM_weight.pth? Among the pretrained models you provided I only see pretrained_HQPs.pth, pretrained_RIDCP.pth, and weight_for_matching_dehazing_Flickr.pth, but your README requires CHM_weight.pth. Also, the public dataset link only contains depth_500 and rgb_500 — do these need to be processed through some training procedure first, or can they be used directly? You may have already addressed these questions and I missed it; apologies for the disturbance, and looking forward to your reply, thanks!

I still haven't cleaned up the code for obtaining CHM — the code I wrote earlier is messy, I've been busy lately, and it's still sitting on my old machine; I'll try to release it as soon as I can. weight_for_matching_dehazing_Flickr.pth is the CHM weight. The depth maps in the dataset were generated with RA_depth, as described in the paper.