2 comments by ZhengyuXia
I had the same issue, but fixed it by commenting out or removing the definition for pretrained. For example, in pvt_v2.py:

    class pvt_v2_b0(PyramidVisionTransformerV2):
        def __init__(self, **kwargs):
            super(pvt_v2_b0, self).__init__(
                patch_size=4, embed_dims=[32, 64, 160, 256], ...
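To make that fix concrete, here is a minimal sketch, assuming the subclass originally forwarded a pretrained argument to the base constructor; the argument name, its value, and the elided architecture arguments are assumptions for illustration, not copied from the actual pvt_v2.py:

```python
# Sketch only: lives in pvt_v2.py, where PyramidVisionTransformerV2 is defined.
class pvt_v2_b0(PyramidVisionTransformerV2):
    def __init__(self, **kwargs):
        super(pvt_v2_b0, self).__init__(
            patch_size=4, embed_dims=[32, 64, 160, 256],
            # ... remaining architecture arguments left unchanged ...
            # pretrained='path/to/pvt_v2_b0.pth',  # hypothetical line: comment out or delete it
            **kwargs)
```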
I have the same confusion. I took the PPliteSeg yml file as a reference config, manually tuned the parameters to follow the rtformer settings, and trained on four 3090 GPUs, but the result came out nearly 7 percentage points lower.