czhxiaohuihui
I am wondering how to visualize attention as in Figure 6 of your paper. Could you share the code for this?
Also, can you explain exactly how to use Large-Scale-VRD to extract the visual relation embeddings? Thanks very much!
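Regarding the attention visualization asked about above: here is a minimal sketch (not the paper's code) of plotting a token-to-token attention matrix as a heatmap with matplotlib. The names `attn`, `src_tokens`, and `tgt_tokens` are placeholders for whatever your model actually returns.

```python
# Hedged sketch: plot one attention head's matrix as a heatmap.
# `attn` is assumed to be a 2-D numpy array of shape
# (num_target_tokens, num_source_tokens).
import numpy as np
import matplotlib.pyplot as plt

def plot_attention(attn, src_tokens, tgt_tokens, out_path="attention.png"):
    fig, ax = plt.subplots(figsize=(8, 8))
    im = ax.imshow(attn, cmap="viridis", aspect="auto")
    ax.set_xticks(np.arange(len(src_tokens)))
    ax.set_xticklabels(src_tokens, rotation=90)
    ax.set_yticks(np.arange(len(tgt_tokens)))
    ax.set_yticklabels(tgt_tokens)
    fig.colorbar(im, ax=ax)
    fig.tight_layout()
    fig.savefig(out_path, dpi=200)
    plt.close(fig)

# Example with random weights standing in for real attention.
if __name__ == "__main__":
    src = ["a", "cat", "on", "the", "mat"]
    tgt = ["person", "riding", "bike"]
    plot_attention(np.random.rand(len(tgt), len(src)), src, tgt)
```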
I use the code in misc/read_floorplan.py to process the RPLAN dataset, but I find that the bboxes are never computed and are always []. So how do I get the bboxes, edges, ...?
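As a fallback while waiting for an answer: below is a minimal sketch of computing per-room bounding boxes from a room-label mask with numpy. The mask format (one integer label per room, 0 for background) is an assumption about how the RPLAN images are decoded, not something taken from read_floorplan.py.

```python
# Hedged sketch, not the repo's implementation: given a 2-D integer mask
# where each room has its own label id (0 = background), compute one
# axis-aligned bounding box per room as (x_min, y_min, x_max, y_max).
import numpy as np

def masks_to_bboxes(room_mask):
    bboxes = {}
    for label in np.unique(room_mask):
        if label == 0:  # skip background pixels
            continue
        ys, xs = np.nonzero(room_mask == label)
        bboxes[int(label)] = (int(xs.min()), int(ys.min()),
                              int(xs.max()), int(ys.max()))
    return bboxes

# Example: two fake "rooms" in a 6x6 mask.
if __name__ == "__main__":
    mask = np.zeros((6, 6), dtype=np.int32)
    mask[0:3, 0:3] = 1
    mask[3:6, 2:6] = 2
    print(masks_to_bboxes(mask))  # {1: (0, 0, 2, 2), 2: (2, 3, 5, 5)}
```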
I tried this on my own dataset: BERT gets about 85%, TextCNN gets 79%, but with distillation I only reach about 77.8%. The two distillation-related parameters follow your code: self.T = 10 (temperature) and self.alpha = 0.9 (the weight between the soft-target loss and the hard-target loss).
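For reference, here is a minimal sketch of the standard Hinton-style distillation loss with the same T and alpha, written in PyTorch. It is not necessarily identical to the repo's implementation, so details such as scaling the KL term by T*T are assumptions worth checking against the original code. Note that with alpha = 0.9 the soft-target term dominates, so tuning T and alpha on a validation set may be worthwhile.

```python
# Hedged sketch of a standard knowledge-distillation loss (soft + hard targets),
# not necessarily identical to the repo's implementation.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=10.0, alpha=0.9):
    # Soft-target loss: KL divergence between temperature-softened
    # distributions, scaled by T*T to keep gradient magnitudes comparable.
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard-target loss: ordinary cross-entropy against the true labels.
    hard_loss = F.cross_entropy(student_logits, labels)
    return alpha * soft_loss + (1.0 - alpha) * hard_loss

# Example usage with random logits for a 3-class problem.
if __name__ == "__main__":
    student = torch.randn(4, 3, requires_grad=True)
    teacher = torch.randn(4, 3)
    labels = torch.tensor([0, 2, 1, 1])
    print(distillation_loss(student, teacher, labels).item())
```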