Yang Zhou

9 comments by Yang Zhou

@maria-korosteleva, a rotation theta for joint i should only cause a translation change for the child joints of joint i, but not for joint i itself, if we look into the formula T(i)...
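A minimal sketch of that kinematic-chain argument, with a hypothetical 3-joint chain and my own variable names (this is my reading of the T(i) formula, not the repository's code):

```python
import numpy as np

def rot_z(theta):
    """3x3 rotation about z by theta (stand-in for the per-joint rotation)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Hypothetical chain 0 -> 1 -> 2, with rest offsets relative to each parent.
offsets = [np.array([0.0, 0.0, 0.0]),
           np.array([1.0, 0.0, 0.0]),
           np.array([1.0, 0.0, 0.0])]

def joint_positions(thetas):
    """Accumulate world positions: joint i uses only its ancestors' rotations."""
    R = np.eye(3)
    p = np.zeros(3)
    positions = []
    for theta, off in zip(thetas, offsets):
        p = p + R @ off          # position of joint i: theta_i has not entered yet
        positions.append(p)
        R = R @ rot_z(theta)     # theta_i only affects transforms of descendants
    return positions

p_rest = joint_positions([0.0, 0.0, 0.0])
p_rot = joint_positions([np.pi / 2, 0.0, 0.0])   # rotate joint 0 only
print(np.allclose(p_rest[0], p_rot[0]))  # True: joint 0 itself does not move
print(np.allclose(p_rest[1], p_rot[1]))  # False: the child joint translates
```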

Thanks for the reply @maria-korosteleva, but the shape will cause a change to the shape-blended vertices -> a change to the resting joint locations (via the joint regressor) -> a change to...
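A sketch of that dependency chain (array shapes follow the SMPL convention; the sizes and variable names are mine, chosen for illustration):

```python
import numpy as np

# Hypothetical sizes: V vertices, J joints, B shape coefficients.
V, J, B = 100, 4, 10
rng = np.random.default_rng(0)

template = rng.normal(size=(V, 3))        # rest-pose template vertices
shape_dirs = rng.normal(size=(V, 3, B))   # shape blend-shape basis
regressor = rng.random(size=(J, V))       # joint regressor: joints from vertices
beta = rng.normal(size=B)                 # shape coefficients

# Shape blending moves the vertices ...
v_shaped = template + shape_dirs @ beta   # (V, 3)

# ... which moves the regressed rest joints ...
joints_rest = regressor @ v_shaped        # (J, 3)

# ... so the rigid transforms built from these joints change too,
# even for identical pose parameters theta.
print(joints_rest - regressor @ template)  # nonzero: beta shifts the rest joints
```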

Thanks. I am using it for my own highly customized implementation, and I have figured out the error I encountered. It seems the implementation of quantization-aware training may have some...

The way the quantized conv is simulated is somewhat approximate, and it differs from the TensorFlow paper. I think the big difference between this approximation and TensorFlow's is the...
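For reference, a minimal quantize-dequantize ("fake quant") sketch of the simulation style in the TensorFlow paper (per-tensor, affine; the function name and parameters are illustrative, not from either codebase):

```python
import numpy as np

def fake_quant(x, num_bits=8):
    """Simulate quantization in the forward pass: quantize, then dequantize.

    Picks scale and zero point from the min/max of x, forcing the range
    to cover 0, as in the TensorFlow quantization-aware-training paper.
    """
    qmin, qmax = 0, 2 ** num_bits - 1
    x_min, x_max = min(x.min(), 0.0), max(x.max(), 0.0)
    scale = (x_max - x_min) / (qmax - qmin)
    zero_point = np.clip(np.round(qmin - x_min / scale), qmin, qmax)
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax)
    return scale * (q - zero_point)  # dequantized values carry the rounding error

w = np.random.randn(3, 3).astype(np.float32)
w_q = fake_quant(w)
print(np.abs(w - w_q).max())  # the approximation error the training actually sees
```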

I am doing experiments on a customized model and hope this approximation works. I found that, for my customized model, the fake_bn significantly slows down the loss descent; once I...

A follow-up question to my last one. Assuming fused batchnorm is faster, and assuming the training can reach the same performance, why do all the frameworks directly add an option...

Yes, I have been careful with the "fake_bn", e.g., "by-pass-bn". My practical finding is that it is harmful for training: it indeed slows down model convergence (in terms of...

Yes, I am referring to "fused batchnorm" as in the TensorFlow paper on quantization-aware training. But your implementation of fake_bn actually is "fused batchnorm".
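For clarity, by "fused/folded batchnorm" I mean folding the BN statistics into the conv weights and bias, as described in that paper. A sketch, assuming NCHW-style conv weights of shape (out_ch, in_ch, kh, kw); the function name is mine:

```python
import numpy as np

def fold_bn_into_conv(w, b, gamma, beta, mean, var, eps=1e-5):
    """Fold batchnorm params into conv weights/bias (inference-style fusion).

    y = gamma * (conv(x, w) + b - mean) / sqrt(var + eps) + beta
      = conv(x, w_fold) + b_fold
    """
    std = np.sqrt(var + eps)
    w_fold = w * (gamma / std).reshape(-1, 1, 1, 1)  # per-output-channel scale
    b_fold = beta + gamma * (b - mean) / std
    return w_fold, b_fold

out_ch = 4
w = np.random.randn(out_ch, 3, 3, 3)
b = np.zeros(out_ch)
gamma, beta = np.ones(out_ch), np.zeros(out_ch)
mean, var = np.random.randn(out_ch), np.abs(np.random.randn(out_ch)) + 0.1
w_fold, b_fold = fold_bn_into_conv(w, b, gamma, beta, mean, var)
```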

I have found some TensorFlow quantization-aware-training code here: https://github.com/tensorflow/tensorflow/blob/e4262fb2fbf1cb33aaea79ff81754d1e92e99af1/tensorflow/contrib/quantize/python/fold_batch_norms.py#L344. From its docstring: "Computes batch norm correction params. Before batch normalization is frozen: We use batch statistics for batch norm. correction_scale..."
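My reading of that correction logic, as a sketch (sigma_b/mu_b are the batch statistics, sigma_mv/mu_mv the moving ones; please check the linked file for the exact frozen-phase behaviour):

```python
def bn_corrections(gamma, mu_b, sigma_b, mu_mv, sigma_mv, frozen):
    """Sketch of my reading of the correction params in the linked file.

    The weights are first folded with the batch statistics (gamma / sigma_b);
    correction_scale then rescales them to gamma / sigma_mv, so the quantized
    weights track the slowly-moving statistics rather than jumping across
    mini-batches. correction_recip / correction_offset apply to the conv output.
    """
    correction_scale = sigma_b / sigma_mv            # applied to folded weights
    if not frozen:
        # Output rescaled back, so training still sees batch statistics.
        correction_recip = 1.0 / correction_scale
        correction_offset = 0.0
    else:
        # After the freeze step: switch seamlessly to moving statistics.
        correction_recip = 1.0
        correction_offset = gamma * (mu_b / sigma_b - mu_mv / sigma_mv)
    return correction_scale, correction_recip, correction_offset
```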