Fangneng Zhan
@vglsd thanks for your reply. But it seems that the dataset at this new link doesn't match the pre-processing code, e.g. there is no index_ade20k.mat file in this dataset.
Thanks for the feedback, I will check and update it next week.
Hi, I just updated the spe part in https://github.com/fnzhan/RABIT/blob/658c6af2cbea1d6dbff87e769f7875250a47840b/data/pix2pix_dataset.py#L72
It seems you set --warp_mask_losstype cycle in training; you may follow the default setting of --warp_mask_losstype direct instead. I have corrected the bug in generator.py.
Hi, I have a deadline coming up and will include the code for this part early next month.
Hi, I updated the implementation of top-k, including ot_topk and differentiable_topk, in models/networks/ranking_attention.py. You can also directly test the top-k function in util/topk_test.py.
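For anyone unfamiliar with the idea, a differentiable (soft) top-k can be realised as k rounds of softmax selection. The sketch below is only an illustration of that general technique in NumPy; the function name and exact behaviour are my assumptions, not the repo's differentiable_topk:

```python
import numpy as np

def differentiable_topk(scores, k, temperature=1.0):
    # Soft top-k via k rounds of softmax: each round yields a soft
    # one-hot "winner", which is added to the mask and then suppressed,
    # so (unlike a hard top-k) every entry receives a gradient signal.
    scores = scores.astype(float).copy()
    mask = np.zeros_like(scores)
    for _ in range(k):
        shifted = (scores - scores.max()) / temperature  # stabilised softmax
        probs = np.exp(shifted)
        probs /= probs.sum()
        mask += probs
        scores[np.argmax(probs)] = -np.inf  # mask this round's winner
    return mask
```

As temperature goes to zero, the returned mask approaches the hard indicator of the top-k entries while remaining smooth at higher temperatures.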
Hi, ot_weight is actually the weight on negative pairs in contrastive learning.
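To make the role of such a weight concrete, here is a hedged sketch of an InfoNCE-style contrastive loss for a single anchor, where a scalar scales the negative-pair term; the function and parameter names are hypothetical, not the repo's code:

```python
import numpy as np

def weighted_contrastive_loss(sim_pos, sim_neg, neg_weight=1.0, tau=0.07):
    # sim_pos: similarity of the anchor to its positive sample.
    # sim_neg: similarities of the anchor to its negative samples.
    # neg_weight (cf. ot_weight) scales the contribution of negatives:
    # larger values push negatives further away in embedding space.
    logits_pos = np.exp(sim_pos / tau)
    logits_neg = neg_weight * np.exp(np.asarray(sim_neg, dtype=float) / tau)
    return -np.log(logits_pos / (logits_pos + logits_neg.sum()))
```

With neg_weight = 0 the negatives vanish and the loss is zero; increasing it makes the denominator larger and the loss stricter.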
Please refer to Utils/SWD/cal_sliced_wasserstein.py.
> please refer to Utils/SWD/sliced_wasserstein.py
Yes, as you described.
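For context, the sliced Wasserstein distance referenced above is usually computed by projecting both point sets onto random directions and averaging 1-D Wasserstein distances between the sorted projections. A minimal NumPy sketch of that standard recipe, assuming equal-sized point clouds (this is not the repo's implementation):

```python
import numpy as np

def sliced_wasserstein(x, y, n_projections=64, seed=0):
    # x, y: point clouds of shape (n, d) with the same n.
    # For each random unit direction, the 1-D Wasserstein-1 distance is
    # just the mean absolute difference of the sorted projections.
    rng = np.random.default_rng(seed)
    d = x.shape[1]
    total = 0.0
    for _ in range(n_projections):
        theta = rng.normal(size=d)
        theta /= np.linalg.norm(theta)  # random direction on the unit sphere
        px, py = np.sort(x @ theta), np.sort(y @ theta)
        total += np.mean(np.abs(px - py))
    return total / n_projections
```

The estimate improves as n_projections grows; identical clouds give exactly zero.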