
How to speed up the loss computation in DTSH

Open ssqiao opened this issue 3 years ago • 1 comment

Hi, swuxyj. Nice work for the community. I noticed that the DTSH training loss contains a for loop that is somewhat time-consuming. Is there any chance to speed this op up? It seems the for loop could be parallelized.

ssqiao · Aug 02 '22 03:08


You can look at other people's implementations for reference; the current version is already the best way I can think of.

swuxyj · Aug 29 '22 07:08
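
For reference, below is one way the per-anchor loop might be vectorized. This is only a sketch: it assumes the common DTSH formulation in which each anchor's loss is the negative triplet log-likelihood over its (positive, negative) inner-product differences plus a quantization term. The function name `dtsh_loss_vectorized` and the default values for `alpha` and `lam` are placeholders, not the repo's actual API.

```python
import torch
import torch.nn.functional as F

def dtsh_loss_vectorized(u, y, alpha=5.0, lam=1.0):
    """Vectorized sketch of a DTSH-style triplet log-likelihood loss.

    u: (N, bit) real-valued codes from the network
    y: (N, C) one-hot / multi-hot labels
    alpha, lam: margin and quantization weights (hypothetical defaults)
    """
    inner_product = u @ u.t()            # (N, N) pairwise inner products
    s = (y @ y.t() > 0)                  # (N, N) bool: do i and j share a label?

    # All anchor/positive/negative differences at once:
    # triple[i, j, k] = <u_i, u_j> - <u_i, u_k> - alpha
    triple = inner_product.unsqueeze(2) - inner_product.unsqueeze(1) - alpha
    triple = triple.clamp(min=-100, max=50)

    # A triplet (i, j, k) is valid when j is a positive and k a negative of anchor i
    valid = s.unsqueeze(2) & (~s).unsqueeze(1)          # (N, N, N) bool

    # -(t - log(1 + exp(t))) == log(1 + exp(-t)) == softplus(-t)
    per_triplet = F.softplus(-triple)

    # Average over the valid triplets of each anchor, then over anchors
    # that actually have both positives and negatives in the batch
    per_anchor_sum = (per_triplet * valid).sum(dim=(1, 2))
    per_anchor_cnt = valid.sum(dim=(1, 2))
    has_triplets = per_anchor_cnt > 0
    if has_triplets.any():
        loss1 = (per_anchor_sum[has_triplets] / per_anchor_cnt[has_triplets]).mean()
    else:
        loss1 = u.new_zeros(())

    # Quantization term pulling the real-valued codes toward ±1
    loss2 = lam * (u - u.sign()).pow(2).mean()
    return loss1 + loss2
```

The trade-off is memory: the fully broadcast version materializes an N×N×N tensor, which is fine for typical hashing batch sizes (e.g. 64 or 128) but grows cubically with the batch. Before swapping anything in, it is worth checking the result numerically against the existing loop on a few batches, since details such as whether the anchor itself counts as its own positive change the value.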