DFL-CNN
Some issues about non-random initialization and accuracy when reproducing your work
Hi, thanks for your kind work implementing the CVPR18 paper DFL-CNN for the community. I noticed that your code does not implement the non-random initialization part, which the authors claim is very important. I found some issues similar to mine, so may I discuss some ideas about non-random initialization with you?

In the original paper, the authors first compute the conv4_3 feature map, obtaining a C * H * W tensor. They then take the l2-norm along the channel dimension to get an H * W heat map. If my understanding is correct so far, how should I interpret the following step: for each class i, obtain the initialization weights for the k 1 * 1 conv filters by non-maximum suppression and k-means? Does it mean the authors first compute the feature maps of all images in the training set, obtain C_k heat maps per class (one per training image), and then run non-maximum suppression and k-means over the peak regions in those C_k heat maps?
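To make my question concrete, here is a minimal sketch of how I currently read that step, for a single class. Everything below is my own assumption rather than code from this repo or the authors: the function name, the 3 * 3 neighbourhood I use for non-maximum suppression, and the peaks_per_image value are placeholders I made up for illustration.

```python
import torch
import torch.nn.functional as F
from sklearn.cluster import KMeans

def init_filters_for_class(feature_maps, k=10, peaks_per_image=3):
    """Sketch of non-random initialization for one class.

    feature_maps: list of conv4_3 outputs for the training images of this
                  class, each a tensor of shape (C, H, W).
    Returns a (k, C) tensor intended for the k 1x1 conv filters of this class.
    """
    peak_vectors = []
    for fmap in feature_maps:
        C, H, W = fmap.shape
        # l2-norm along the channel dimension -> (H, W) heat map
        heat = fmap.norm(p=2, dim=0)
        # crude non-maximum suppression: keep locations that are the maximum
        # of their own 3x3 neighbourhood
        pooled = F.max_pool2d(heat[None, None], kernel_size=3,
                              stride=1, padding=1)[0, 0]
        scores = torch.where(heat == pooled, heat, torch.zeros_like(heat))
        # take the strongest peaks of this image
        idx = scores.flatten().topk(peaks_per_image).indices
        for y, x in zip((idx // W).tolist(), (idx % W).tolist()):
            peak_vectors.append(fmap[:, y, x])
    # cluster the C-dimensional descriptors found at the peaks into k centres
    X = torch.stack(peak_vectors).detach().cpu().numpy()
    centers = KMeans(n_clusters=k, n_init=10).fit(X).cluster_centers_
    # l2-normalise the centres before using them as 1x1 filter weights
    return F.normalize(torch.from_numpy(centers).float(), p=2, dim=1)
```

If that reading is right, I would then copy the (k, C) result into the slice of the 1 * 1 conv layer that belongs to class i, e.g. conv1x1.weight.data[i * k:(i + 1) * k] = filters.view(k, C, 1, 1). Is that how you understand the paper as well?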
Furthermore, when I ran your code, I only got 56.6% accuracy on the test set. I don't know what the problem is, and it has confused me for a couple of days. Could you please help me figure it out?
Thanks for your work!
My test accuracy is 57%!
My test accuracy is 57% as well! Haha, it's interesting to find someone in the same situation as me.
@CZW123456 @deepblue0822 When I run the code, an error always appears. Could you give me some help? Thanks a lot! In train.py it happens here:

    for i, (data, target, paths) in enumerate(train_loader):   # <-- the error is raised here
        if args.gpu is not None:
            data = data.cuda()
            target = target.cuda()

AttributeError: Can't pickle local object 'get_transform_for_train.
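Update: I think this comes from the DataLoader workers. When num_workers > 0 (especially on Windows, which spawns worker processes), everything handed to a worker must be picklable, and a transform defined as a nested function or lambda inside get_transform_for_train cannot be pickled. Below is the workaround sketch I am trying, assuming a standard torchvision pipeline; the transform values, image size, and dataset path are placeholders, not the repo's actual settings.

```python
import torchvision.transforms as transforms
from torchvision import datasets
from torch.utils.data import DataLoader

# Define the transform at module level (no nested functions or lambdas),
# so that DataLoader worker processes can pickle it.
train_transform = transforms.Compose([
    transforms.RandomResizedCrop(448),            # placeholder size
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

train_dataset = datasets.ImageFolder('path/to/CUB/train',  # placeholder path
                                     transform=train_transform)

# Alternatively, setting num_workers=0 avoids pickling worker arguments entirely.
train_loader = DataLoader(train_dataset, batch_size=32,
                          shuffle=True, num_workers=0)
```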
@CZW123456 Can you please share the weights? I am not able to download the weights. Thanks
I have the same question. I have no idea how to implement the non-random initialization. Could you please share the code?