CVPR21_PASS
PyTorch implementation of our CVPR2021 (oral) paper "Prototype Augmentation and Self-Supervision for Incremental Learning"
Hello, I wonder why the output and soft_feat_aug are divided by args.temp when computing the CE loss?
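For context, dividing logits by a temperature before the softmax is the standard trick in distillation-style losses. The sketch below is a minimal illustration of that effect, not the repository's exact code; the logit values are made up, and `temp` stands in for `args.temp`:

```python
import torch
import torch.nn.functional as F

# Hypothetical logits for a batch of 2 samples over 5 classes (made-up values).
output = torch.tensor([[2.0, 1.0, 0.5, 0.0, -1.0],
                       [0.2, 2.5, 0.1, 0.0, 0.3]])
target = torch.tensor([0, 1])
temp = 2.0  # stands in for args.temp (assumed name from the question)

# A temperature > 1 flattens the softmax distribution, so probability mass
# is spread over more classes -- the usual motivation in knowledge
# distillation, where soft targets carry inter-class similarity information.
loss_sharp = F.cross_entropy(output, target)
loss_soft = F.cross_entropy(output / temp, target)

# When the target class already has the largest logit, flattening lowers
# its probability, so the tempered CE loss is larger here.
print(loss_sharp.item(), loss_soft.item())
```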
It seems that prototype augmentation is very similar to implicit semantic data augmentation. Are there any specific differences in the details? Reference: "Implicit Semantic Data Augmentation for Deep Networks," NeurIPS 2019.
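To make the comparison concrete, a minimal sketch of the prototype-augmentation idea is below: old-class pseudo-features are sampled explicitly by adding Gaussian noise around stored class-mean prototypes, whereas ISDA instead optimizes an implicit upper bound on the expected loss without drawing samples. The prototype values and the noise scale here are assumptions for illustration, not the repository's stored statistics:

```python
import torch

# Hypothetical stored prototypes (feature means) for 3 old classes in a
# 4-dimensional feature space; values are made up for illustration.
prototypes = torch.tensor([[0.5, 0.1, 0.0, 0.2],
                           [0.0, 0.9, 0.3, 0.1],
                           [0.2, 0.0, 0.7, 0.4]])
radius = 0.1  # assumed noise scale (e.g. an averaged feature variance)

# Prototype augmentation: draw pseudo-features for old classes by adding
# isotropic Gaussian noise around each sampled prototype.
batch = 8
idx = torch.randint(0, prototypes.size(0), (batch,))
pseudo_feats = prototypes[idx] + radius * torch.randn(batch, prototypes.size(1))
pseudo_labels = idx  # each pseudo-feature keeps its prototype's class label
```

These pseudo-features can then be fed to the classifier head alongside new-class features so old decision boundaries are maintained without storing raw exemplars.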
Hello, I wonder why the code in the train() function is written like this: ```python target = torch.stack([target * 4 + k for k in range(4)], 1).view(-1) ``` instead of using...
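The label transform in that line can be traced by hand: with 4-fold rotation self-supervision, each original class c is expanded into 4 classes (4*c + k for rotation k), and the stack-then-view keeps the 4 labels of one sample adjacent so they line up with a batch where the 4 rotations of each image sit next to each other. A small sketch (the two-sample `target` values are made up):

```python
import torch

# Two samples with original class labels 2 and 5 (made-up values).
target = torch.tensor([2, 5])

# stack(..., 1) builds a (num_samples, 4) matrix of expanded labels,
# and view(-1) flattens it row by row, i.e. sample-major order:
# all 4 rotation labels of sample 0, then all 4 of sample 1.
expanded = torch.stack([target * 4 + k for k in range(4)], 1).view(-1)
print(expanded.tolist())  # [8, 9, 10, 11, 20, 21, 22, 23]

# A rotation-major alternative such as
#   torch.cat([target * 4 + k for k in range(4)])
# would instead give [8, 20, 9, 21, ...]-style grouping by k, which no
# longer matches inputs expanded sample by sample.
```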