Haojin Yang
Thank you for the comments! I would like to explain further. Due to the scale of the course (up to 10k participants) and the need to control the learning quality (maybe most...
Another limitation of the current Sedna is the one mentioned in my first post: we cannot ask up to 10k clients to start the federated learning task at roughly the same...
> @kaivu1999 that's a really good discussion you brought up and has been of interest since the release of BMXNetv1.
> pointing out the findings of @yanghaojin that for training...
> Hi~ I am reproducing BinaryDenseNet in the paper. When I go through the code, I find three versions of densenet called densenet, densenet_x and densenet_y; what is the exact version...
Thanks for the reply! Let me summarize my understanding of the bolt implementation; please correct me where I'm wrong:

- activation input (NCHW, fp16) -> (N_C/8_H_W_c8, fp16) -> bit-packing using [transformFromHalf](https://github.com/huawei-noah/bolt/blob/c3eb7a22e4f7acc2cc450606d1875666d4b11574/common/uni/include/data_type.h#L90) to (N_C/8_H_W_c8) / 8, bint8 -> further steps in [convolution_xnor_A55](https://github.com/huawei-noah/bolt/blob/c3eb7a22e4f7acc2cc450606d1875666d4b11574/compute/tensor/src/cpu/arm/bnn/convolution_xnor_A55.cpp#L69) (see the sketch below for what I assume this packing does)
- weight (NCHW, fp32) -> bit-packing via model converter...
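To make sure we are talking about the same packing step, here is a rough standalone sketch of sign-based bit-packing as I understand it. This is only my own illustration, not bolt's actual code: the group size of 8, the MSB-first bit order, the use of float instead of fp16, and the helper name `packSignBits` are all assumptions.

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

// Pack every 8 consecutive values into one byte:
// bit = 1 if the value is non-negative (treated as +1), bit = 0 if negative (-1).
// The MSB-first bit order here is an assumption, not necessarily what bolt uses.
std::vector<uint8_t> packSignBits(const std::vector<float>& x) {
    std::vector<uint8_t> out((x.size() + 7) / 8, 0);
    for (size_t i = 0; i < x.size(); ++i) {
        if (x[i] >= 0.0f) {
            out[i / 8] |= static_cast<uint8_t>(1u << (7 - i % 8));
        }
    }
    return out;
}

int main() {
    // 8 activation values -> 1 packed byte
    std::vector<float> act = {0.5f, -1.2f, 3.0f, -0.1f, 2.2f, 1.0f, -4.0f, 0.0f};
    std::vector<uint8_t> packed = packSignBits(act);
    std::printf("packed byte: 0x%02X\n", packed[0]); // 0b10101101 = 0xAD for this input
    return 0;
}
```

Is this roughly what transformFromHalf produces before convolution_xnor_A55, or does the packing use a different bit order?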
> Does the convolution require padding, and what is the padding value? This matters; you could first compare the results of a convolution layer without padding.

In my tests I specifically removed the padding. May I ask whether bolt's unit tests include an alignment test that uses (-1, 1) as parameters and simulates the binarized computation with multiply-accumulate? Thanks!
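To clarify what I mean by that alignment test, here is a minimal sketch of the kind of check I have in mind: compute a dot product over {-1, +1} values once with ordinary multiply-accumulate and once with XNOR + popcount, and verify that both give the same result (2*matches - n). This is only my own illustration, not an existing bolt unit test; the function names and the flat 1-D layout are assumptions.

```cpp
#include <bitset>
#include <cstdint>
#include <cstdio>
#include <vector>

// Reference path: ordinary multiply-accumulate over +/-1 values.
int dotMAC(const std::vector<int>& a, const std::vector<int>& w) {
    int acc = 0;
    for (size_t i = 0; i < a.size(); ++i) acc += a[i] * w[i];
    return acc;
}

// Bit path: pack signs into one 64-bit word, then XNOR + popcount.
// Assumes n <= 63 just to keep the mask simple.
int dotXNOR(const std::vector<int>& a, const std::vector<int>& w) {
    uint64_t pa = 0, pw = 0;
    for (size_t i = 0; i < a.size(); ++i) {
        pa |= static_cast<uint64_t>(a[i] > 0) << i;
        pw |= static_cast<uint64_t>(w[i] > 0) << i;
    }
    uint64_t xnor = ~(pa ^ pw) & ((1ULL << a.size()) - 1);   // keep only the valid bits
    int matches = static_cast<int>(std::bitset<64>(xnor).count());
    return 2 * matches - static_cast<int>(a.size());         // +1 per match, -1 per mismatch
}

int main() {
    std::vector<int> act = {1, -1, -1, 1, 1, -1, 1, 1};
    std::vector<int> wgt = {1, 1, -1, -1, 1, -1, -1, 1};
    std::printf("MAC = %d, XNOR = %d\n", dotMAC(act, wgt), dotXNOR(act, wgt)); // both should print 2
    return 0;
}
```

That is the kind of comparison I was hoping already exists for the bnn kernels; running it without padding would help me see whether my mismatch comes from the packing or from the accumulation.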