Sunflower7788
Hello, since this issue has been inactive for over a week, it will be closed automatically. If the problem persists, feel free to reopen it or file a new issue. Thanks for your understanding.
You are welcome to try training with the new PaddleX version: https://aistudio.baidu.com/intro/paddlex
Please follow the upcoming PaddleSeg v2.8 release, which will support SAM. Work combining it with interactive segmentation will follow; stay tuned.
Please provide the command you ran.
Please provide the exact command you executed, your environment details, etc.
Yes, I know. In focal_loss.py, loss_ = -1 * np.power(1 - pro_, self.gamma) * np.log(pro). I know the loss metric is just for display and doesn't affect the model. But I think...
I think softmax loss = -sum(y_i * log(p_i)). But the label is one-hot, so softmax loss = -log(p_t).
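The point above can be illustrated with a minimal NumPy sketch (this is not PaddleSeg's actual implementation; the function names and the sample probabilities are illustrative assumptions). With a one-hot label, the cross-entropy sum collapses to -log(p_t), and focal loss only adds the (1 - p_t)^gamma modulating factor in front of that same term:

```python
import numpy as np

def softmax_ce(probs, target):
    # -sum(y_i * log(p_i)) with a one-hot label y reduces to -log(p_t),
    # where p_t is the predicted probability of the true class.
    return -np.log(probs[target])

def focal_loss(probs, target, gamma=2.0):
    # Focal loss scales the same cross-entropy term by (1 - p_t)^gamma,
    # down-weighting examples the model already classifies confidently.
    p_t = probs[target]
    return ((1.0 - p_t) ** gamma) * -np.log(p_t)

probs = np.array([0.1, 0.7, 0.2])  # hypothetical softmax output for one sample
ce = softmax_ce(probs, target=1)       # -log(0.7)
fl = focal_loss(probs, target=1)       # (0.3)^2 * -log(0.7), much smaller
```

For a confident prediction (p_t = 0.7, gamma = 2) the focal term shrinks the loss by a factor of 0.09, which is exactly the behavior the modulating factor is designed to produce.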
Hello, issues with no reply for over a week will be closed. If the problem persists, you can reopen the issue.
Thanks. But I want to know: is this the data division first proposed in this paper? Why is this data division different from the one used in semi-supervised detection for comparing the results of Boxes...
> As far as I know, this division was first proposed in "Data Distillation: Towards Omni-Supervised Learning". Got it. Thanks.