LaVieEnRoseSMZ

11 comments by LaVieEnRoseSMZ

Thanks very much for your patience and detailed answer. I have spent days reproducing your work in PyTorch and already get 56.3% using the hyperparameters in your wiki, which...

I am referring to the URL in the comments of the arXiv paper: [supplementary material](https://owncloud.hpi.de/s/1jrAUnqRAfg0TXH)

I still have one more question about the code of [Binarylayer](https://github.com/hpi-xnor/BMXNet-v2/blob/683fb59d35c4f5b044662ccd912e991edd5ac4a8/python/mxnet/gluon/nn/binary_layers.py): I cannot find the definition and implementation of "det_sign", which is used to quantize activations and weights...
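
If I understand correctly, "det_sign" is a deterministic sign with a straight-through gradient. Here is a minimal PyTorch sketch of what I assume it does; the names and the exact backward rule are my assumptions, not the repository's code:

```python
import torch

class DetSign(torch.autograd.Function):
    """Sketch of a deterministic sign with a straight-through gradient."""

    @staticmethod
    def forward(ctx, x):
        # Map x >= 0 to +1 and x < 0 to -1, so the output is never 0.
        return torch.where(x >= 0, torch.ones_like(x), -torch.ones_like(x))

    @staticmethod
    def backward(ctx, grad_output):
        # Straight-through estimator: pass the incoming gradient unchanged
        # (gradient clipping can be handled by a separate layer).
        return grad_output

def det_sign(x):
    return DetSign.apply(x)

# Example: binarize a weight tensor and backpropagate through it.
w = torch.randn(4, requires_grad=True)
det_sign(w).sum().backward()
print(w.grad)  # all ones under the straight-through assumption
```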

One more question about the gradient_cancel layer: does it work exactly as the supplementary material describes? ![image](https://user-images.githubusercontent.com/17960617/62770264-43332c80-bacd-11e9-8a0e-60b99b5d5d46.png) Thanks a lot for your patient answers~
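
For reference, this is how I currently read the gradient_cancel layer from the screenshot above: identity in the forward pass, with the gradient zeroed where |x| exceeds a threshold in the backward pass. A minimal PyTorch sketch of my understanding (the default threshold and the names are my assumptions, not the repository's code):

```python
import torch

class GradCancel(torch.autograd.Function):
    """Sketch: identity forward, gradient cancelled where |x| > threshold."""

    @staticmethod
    def forward(ctx, x, threshold):
        ctx.save_for_backward(x)
        ctx.threshold = threshold
        return x.clone()

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        mask = (x.abs() <= ctx.threshold).to(grad_output.dtype)
        # The trailing None means no gradient for the threshold argument.
        return grad_output * mask, None

def gradient_cancel(x, threshold=1.0):
    return GradCancel.apply(x, threshold)
```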

Hi~ I am reproducing BinaryDenseNet from the paper. When going through the code, I find three versions of DenseNet, called densenet, densenet_x and densenet_y. Which is the exact version used...

Thanks for your attention to our paper. 1024 is a typo; it should be the number of classes, and I have corrected it. Thanks for pointing it out. This network...

This is work I did at Huawei's Noah's Ark Lab. Recently I have also been reproducing these works, and the training script I use is [this](https://github.com/akamaster/pytorch_resnet_cifar10/blob/master/trainer.py). The main difference...

The first layer is full-precision, which means its FLOPs are calculated as 3*128*3*3*32*32 = 3538944 ≈ 3.5M. We followed [BirealNet](https://arxiv.org/pdf/1808.00278.pdf); the calculation for ResNet18-Uniform1 and BirealNet-18 is quite close, but we quantize the pointwise conv...
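
As a quick check, here is the arithmetic above as a small Python sketch (the helper name is just for illustration):

```python
# Multiply-accumulate count of a convolution: C_in * C_out * K * K * H_out * W_out.
def conv_flops(c_in, c_out, k, h_out, w_out):
    return c_in * c_out * k * k * h_out * w_out

first_layer = conv_flops(c_in=3, c_out=128, k=3, h_out=32, w_out=32)
print(first_layer)        # 3538944
print(first_layer / 1e6)  # ~3.5 MFLOPs
```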

Thanks for your appreciation! We have noticed that quantizing activations into positive numbers with HSwish is not optimal, and there are papers such as [LSQ+](https://arxiv.org/pdf/2004.09576.pdf) that investigate this problem. It can...
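
For illustration only, a minimal PyTorch sketch of the LSQ+-style idea, i.e. a learnable offset that shifts the quantization grid so the negative part of the HSwish output is not simply clipped away. This is a sketch of the idea, not the LSQ+ authors' code, and all names are illustrative:

```python
import torch
import torch.nn as nn

class OffsetActQuantizer(nn.Module):
    """Sketch: unsigned quantizer with learnable step size and learnable offset."""

    def __init__(self, bits=4):
        super().__init__()
        self.qp = 2 ** bits - 1                      # unsigned levels, e.g. 0..15
        self.step = nn.Parameter(torch.tensor(1.0))  # learnable step size s
        self.beta = nn.Parameter(torch.tensor(0.0))  # learnable offset

    def forward(self, x):
        # Shift by the offset, clamp to the unsigned grid, round with a
        # straight-through estimator, then map back to the original range.
        x_shift = ((x - self.beta) / self.step).clamp(0, self.qp)
        x_q = (x_shift.round() - x_shift).detach() + x_shift
        return x_q * self.step + self.beta
```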

Thanks for your interest in our work. We only implement LSQ in this framework, as LSQ is the state-of-the-art quantization method. We chose LSQ because the quantization scales need...
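
For reference, a minimal PyTorch sketch of the core LSQ quantizer (learnable step size, the paper's gradient-scale trick, and a straight-through round). This is written from the LSQ paper's description, not from this framework's implementation, and all names are illustrative:

```python
import torch
import torch.nn as nn

def grad_scale(x, scale):
    # Keep the forward value of x but scale its gradient by `scale`.
    return (x - x * scale).detach() + x * scale

def round_pass(x):
    # Round in the forward pass, identity gradient in the backward pass.
    return (x.round() - x).detach() + x

class LSQQuantizer(nn.Module):
    """Sketch of Learned Step Size Quantization with a learnable step size."""

    def __init__(self, bits=4, signed=True):
        super().__init__()
        self.qn = -(2 ** (bits - 1)) if signed else 0
        self.qp = 2 ** (bits - 1) - 1 if signed else 2 ** bits - 1
        self.step = nn.Parameter(torch.tensor(1.0))  # learnable step size s

    def forward(self, x):
        # The 1/sqrt(N * Qp) gradient scale from the paper keeps the step-size
        # update magnitude comparable to the weight and activation updates.
        g = 1.0 / ((x.numel() * self.qp) ** 0.5)
        s = grad_scale(self.step, g)
        x_q = round_pass((x / s).clamp(self.qn, self.qp))
        return x_q * s
```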