chenying99

Results 6 comments of chenying99

I tried to implement MobileViT v2 with TensorFlow 2.x, using layers.LayerNormalization(epsilon=1e-6) for layer normalization. Comparing the output of each layer, I found that it is inconsistent with the output of the...

Later, I switched to tfa.layers.GroupNormalization (from TensorFlow Addons) with groups=1 and checked the output of each layer again; it is consistent with the LayerNorm of the PyTorch version: https://www.tensorflow.org/addons/api_docs/python/tfa/layers/GroupNormalization I checked the transformer...
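
A minimal sketch of the substitution described above (shapes, epsilon values, and variable names are illustrative, not taken from the MobileViT v2 code):

```python
import tensorflow as tf
import tensorflow_addons as tfa

# channels-last feature map, shape (batch, H, W, C); values are placeholders
x = tf.random.normal((2, 16, 16, 64))

# keras LayerNormalization normalizes over the last axis only
ln = tf.keras.layers.LayerNormalization(epsilon=1e-6)

# GroupNormalization with groups=1 normalizes over all non-batch axes,
# which is the variant that matched the PyTorch reference in my comparison
gn = tfa.layers.GroupNormalization(groups=1, epsilon=1e-5)

y_ln = ln(x)
y_gn = gn(x)
```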

One more question: the loss function boundary_loss_func = DetailAggregateLoss() is called as boundary_loss_func(detail8, lb), where the ground-truth lb still contains ignore_idx=255, i.e. the contour information of the ignore_idx=255 regions is included. The other loss function, criteria_p = OhemCELoss(thresh=score_thres, n_min=n_min, ignore_lb=ignore_idx), skips pixels of class ignore_lb=ignore_idx when computing the cross-entropy. At inference time, what class will the model predict for pixels whose label is ignore_idx (=255)?
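
For reference, a small standalone sketch (not taken from the STDC repository) of the ignore_index semantics the question is about: pixels labelled 255 contribute nothing to the cross-entropy, while the network only ever outputs logits over the real classes, so argmax still assigns one of those classes to every pixel:

```python
import torch
import torch.nn.functional as F

n_classes, ignore_idx = 19, 255
logits = torch.randn(1, n_classes, 4, 4)            # (N, C, H, W) dummy predictions
labels = torch.randint(0, n_classes, (1, 4, 4))
labels[0, 0, 0] = ignore_idx                        # one ignored pixel

# the ignored pixel is excluded from the loss
loss = F.cross_entropy(logits, labels, ignore_index=ignore_idx)

# at inference the prediction is always a value in [0, n_classes)
pred = logits.argmax(dim=1)
```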

Another question: for the loss function OhemCELoss, if training on a single GPU, what should n_min be set to? Thanks.
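
As a hedged illustration, BiSeNet/STDC-style training scripts commonly derive n_min as a fixed fraction (often 1/16) of the pixels in one GPU's batch; the batch size and crop dimensions below are placeholders:

```python
# illustrative values, not the repository's defaults
batch_per_gpu = 8
crop_h, crop_w = 512, 1024

# keep at least 1/16 of the pixels in the per-GPU batch for OHEM
n_min = batch_per_gpu * crop_h * crop_w // 16
```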

> > I noticed that, compared with BiSeNet, the optimizer in STDC is additionally passed boundary_loss_func. Looking at the output, is that because the initially set fusion weights 0.6, 0.3, 0.1 are optimized during training?
>
> Yes, because it contains trainable parameters.

Hello, is the trainable parameter you mentioned this one? self.fuse_kernel = torch.nn.Parameter(torch.tensor([[6./10], [3./10], [1./10]], dtype=torch.float32).reshape(1, 3, 1, 1).type(torch.cuda.FloatTensor))
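
A rough sketch (assumed, not verbatim from the STDC code) of how such a 1x1 fusion kernel can blend three boundary maps; because fuse_kernel is registered as an nn.Parameter on the loss module, passing boundary_loss_func's parameters to the optimizer makes the 0.6/0.3/0.1 weights trainable:

```python
import torch
import torch.nn.functional as F

# learnable fusion weights, initialized to 0.6, 0.3, 0.1, shaped as a 1x1 conv kernel
fuse_kernel = torch.nn.Parameter(
    torch.tensor([[6. / 10], [3. / 10], [1. / 10]], dtype=torch.float32).reshape(1, 3, 1, 1))

# three single-channel boundary maps at different scales (dummy data)
b2, b4, b8 = (torch.rand(2, 1, 64, 64) for _ in range(3))
stacked = torch.cat([b2, b4, b8], dim=1)   # (N, 3, H, W)
fused = F.conv2d(stacked, fuse_kernel)     # (N, 1, H, W) learnable weighted sum
```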

Thank you for sharing. 1) In the file inference.py, lines 123-125: if (output_composition is not None) and (output_type == 'video'): if bgr_source is not None and os.path.isfile(bgr_source): if...