Dongsheng Jiang
Could you provide the parameter settings for fine-tuning at 384 from the 224 checkpoint?
Hi, I just changed your code to ResNet50(num_classes=10, resolution=(224, 224)), which ended with a lower accuracy of 90.15%. Do you have other changes to reach the 95.11%?
I found the swin_large_patch4_window7_224.yaml config file in your code. An interesting question: how is the performance for the larger model?
Without any modifications, I found during training that the colors keep changing at every iteration. Although there is an anime effect, the expressiveness and smoothness are worse than those of the model provided by the author. Are there any other tricks?
I evaluated vit_base (500/1600-epoch pretraining) on ImageNet-1k using the kNN metric. Loading all the pretrained parameters and using the ViT GAP method (no cls token needed), the 20-NN result is...
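For reference, here is a minimal sketch of the GAP-based 20-NN evaluation described above. It is not the repo's actual script: it assumes a timm ViT, a hypothetical checkpoint path, and user-supplied data loaders; `extract_features` and `knn_predict` are helpers I named for illustration.

```python
import torch
import timm

device = "cuda" if torch.cuda.is_available() else "cpu"

# ViT with global average pooling over patch tokens; the cls-token head is unused.
model = timm.create_model(
    "vit_base_patch16_224", pretrained=False,
    num_classes=0, global_pool="avg",
).to(device).eval()
# model.load_state_dict(torch.load("pretrained.pth"), strict=False)  # your checkpoint

@torch.no_grad()
def extract_features(loader):
    feats, labels = [], []
    for x, y in loader:
        f = model(x.to(device))                            # (B, 768) GAP features
        feats.append(torch.nn.functional.normalize(f, dim=1).cpu())
        labels.append(y)
    return torch.cat(feats), torch.cat(labels)

@torch.no_grad()
def knn_predict(train_f, train_y, test_f, k=20):
    sims = test_f @ train_f.T                              # cosine similarity
    idx = sims.topk(k, dim=1).indices                      # indices of k nearest neighbors
    votes = train_y[idx]                                   # (N_test, k) neighbor labels
    return votes.mode(dim=1).values                        # majority vote per test sample
```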
It seems the Unet in your code can outperform CE_Net in my training. I directly used your code and retrained the Unet, and got an accuracy of [acc: 0.956 | sen:...
Hello, I am also reproducing this application of HDRNet for style transfer, and I have some results so far. The paper is missing details and the code is not public, so reproduction is quite hard. According to the paper, AdaIN has to be applied 6 times, and if the regularization loss is too small, strong color casts appear, so I set regW to 100000. The final effect is actually more like a global color transfer, because HDRNet itself is an edge-preserving filter and cannot transfer style.
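For context, this is a minimal sketch of the AdaIN operation mentioned above (the standard formulation from Huang & Belongie, not the paper's exact implementation; the (B, C, H, W) feature layout is an assumption):

```python
import torch

def adain(content: torch.Tensor, style: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    """Adaptive Instance Normalization: align the per-channel mean/std of
    content features to those of style features. Inputs are (B, C, H, W)."""
    c_mean = content.mean(dim=(2, 3), keepdim=True)
    c_std = content.std(dim=(2, 3), keepdim=True) + eps    # eps avoids division by zero
    s_mean = style.mean(dim=(2, 3), keepdim=True)
    s_std = style.std(dim=(2, 3), keepdim=True) + eps
    return s_std * (content - c_mean) / c_std + s_mean
```

In the setup described above, this operation would be applied 6 times at different feature stages, with the regularization weight (regW) balancing fidelity against color casts.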
I want to know whether your reimplementation achieves results similar to the original paper. If so, that is awesome!
When will the training code of EfficientViT-SAM be released?
It seems there is no main training and testing code; please add it for reimplementation.