Yijie Lin

18 comments by Yijie Lin

@lywinaaa The missing data is constructed by [get_mask.py](https://github.com/XLearning-SCU/2021-CVPR-Completer/blob/ad37110046234341dab63b0ed0c862680a69e27a/get_mask.py#L6C1-L6C1), and we have released a three-view version in [2022-TPAMI-DCP](https://github.com/XLearning-SCU/2022-TPAMI-DCP).

What version of PyTorch are you using? We have not encountered this issue before.

The code was tested with torch 1.2.0. Could you try downgrading your PyTorch installation to 1.2.0 or a nearby version?

Alternatively, you could change the line `loss.backward(torch.ones_like(loss))` to `loss = torch.sum(loss); loss.backward()`. This might work.
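For context, a minimal sketch of why the two calls are equivalent (this is a standalone toy example, not the repository's training loop):

```python
import torch

# A non-scalar loss needs either an explicit gradient argument in backward()
# or a reduction to a scalar before calling backward().
x = torch.randn(4, requires_grad=True)

# Option 1: vector-Jacobian product with a tensor of ones.
loss = x ** 2                          # element-wise loss, shape (4,)
loss.backward(torch.ones_like(loss))
g1 = x.grad.clone()

# Option 2 (the suggested change): sum to a scalar, then call backward().
x.grad = None
loss = torch.sum(x ** 2)
loss.backward()
g2 = x.grad.clone()

print(torch.allclose(g1, g2))          # True: both produce the same gradients
```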

Adding two lines to [config.py](https://github.com/Thinklab-SJTU/ThinkMatch/blob/b3efd1b5710545ed6e8573ba8c9294b4f2da4e21/src/utils/config.py#L258) could make it work. The issue seems to be related to changes in the YAML loading package.

```
if 'RESCALE' in yaml_cfg['PROBLEM']:
    yaml_cfg['PROBLEM']['RESCALE'] = tuple(yaml_cfg['PROBLEM']['RESCALE'])
...
```
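For illustration, a minimal sketch of the underlying issue, assuming `RESCALE` is written as a YAML list in the config file (the inline YAML string here is made up for the example):

```python
import yaml

# PyYAML's safe loaders parse "RESCALE: [256, 256]" as a Python list; code
# that expects a tuple then breaks, so converting explicitly restores the
# expected type.
yaml_cfg = yaml.safe_load("PROBLEM:\n  RESCALE: [256, 256]\n")
print(type(yaml_cfg['PROBLEM']['RESCALE']))   # <class 'list'>

if 'RESCALE' in yaml_cfg['PROBLEM']:
    yaml_cfg['PROBLEM']['RESCALE'] = tuple(yaml_cfg['PROBLEM']['RESCALE'])

print(yaml_cfg['PROBLEM']['RESCALE'])         # (256, 256)
```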

Hello, in `def get_mask(view_num, alldata_len, missing_rate)`, `missing_rate` defines the percentage of samples that have missing data, while the `missing_rate = 0.5` on the third line ensures that, among these incomplete samples, roughly half of the views are missing. With `missing_rate = 0.5`, two-view data yields exactly half of the views missing; in the three-view case, a random mask is used to approximate the 50% probability. The implementation is adapted from [CPM_Net](https://github.com/hanmenghan/CPM_Nets/blob/master/util/get_sn.py).

> Given a dataset with v views, we randomly select m samples as incomplete data and randomly remove 1 to...
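For intuition, a toy sketch of how such a mask could be generated (an assumed simplification, not the repository's `get_mask.py`; `make_mask`, `view_keep_rate`, and `seed` are hypothetical names):

```python
import numpy as np

def make_mask(view_num, n_samples, missing_rate, view_keep_rate=0.5, seed=0):
    """Toy mask generator.

    missing_rate   : fraction of samples that have at least one missing view.
    view_keep_rate : among incomplete samples, approximate fraction of views kept.
    Returns an (n_samples, view_num) binary matrix; 1 = view present, 0 = missing.
    """
    rng = np.random.default_rng(seed)
    mask = np.ones((n_samples, view_num), dtype=int)
    n_incomplete = int(n_samples * missing_rate)
    incomplete = rng.choice(n_samples, size=n_incomplete, replace=False)
    for i in incomplete:
        keep = rng.random(view_num) < view_keep_rate
        if not keep.any():                      # always keep at least one view
            keep[rng.integers(view_num)] = True
        if keep.all():                          # ensure the sample is actually incomplete
            keep[rng.integers(view_num)] = False
        mask[i] = keep.astype(int)
    return mask

mask = make_mask(view_num=3, n_samples=1000, missing_rate=0.5)
print(mask.mean(axis=0))  # fraction of samples observed per view
```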

Right, it cannot be removed; the two `missing_rate` values have different meanings. The former is the missing rate defined in our paper (how many samples have missing views), while the latter 0.5 is only used to generate the mask. Sorry for the confusion.

Thanks for your interest! To be more precise, the information entropy H(Z) measures the uncertainty of the representation Z. Maximizing H(Z) helps on class-imbalanced, long-tailed datasets (e.g., Caltech), as it mitigates small classes collapsing into large ones. The optimal value of `lamb` may differ across datasets, because each dataset has its own characteristics and distribution, so the uncertainty of Z should not be the same either. For a very balanced dataset, the benefit of `lamb` may be limited. In addition, the choice of `lamb` should also take the dimensionality of Z into account.
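For intuition, a minimal sketch of an entropy term of this kind, written on soft cluster assignments (an assumed form, not the paper's exact loss; `entropy_regularizer` and `z_logits` are hypothetical names):

```python
import torch

def entropy_regularizer(z_logits, lamb=0.1, eps=1e-8):
    """Entropy term weighted by lamb (assumed form).

    Maximizing the entropy of the mean soft assignment discourages all samples
    from collapsing into a few large clusters.
    """
    p = torch.softmax(z_logits, dim=1)            # per-sample soft assignments
    p_mean = p.mean(dim=0)                        # marginal distribution over clusters
    h = -(p_mean * (p_mean + eps).log()).sum()    # H(Z) of the marginal
    return -lamb * h                              # added to the loss, so minimizing it maximizes H(Z)

logits = torch.randn(128, 10)                     # e.g. a batch of 128 samples, 10 clusters
reg = entropy_regularizer(logits, lamb=0.1)
```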