Changed behavior after BatchNormToAffine transformation
Quick summary
A standalone BatchNormalization node with certain settings (see the .onnx file in the attached archive bn_model.zip) changes its functional behavior when transformed with the BatchNormToAffine transformation.
Steps to Reproduce
- The issue was observed when using the FINN docker container, but with the current main branch of qonnx (commit hash: 12c96a3ded06beacab08e0f554e4ed014476c0aa).
- Run the BatchNormToAffine transformation on the ONNX file.
- Execute the model before and after the transformation with a random floating-point input:
  x = gen_finn_dt_tensor(DataType["FLOAT32"], (1, 64, 64, 64))
  inp_dict = {"global_in": x}
- Compare the outputs of the model before and after the transformation.
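The comparison in the steps above can be sketched with plain NumPy as a stand-in for actual qonnx model execution (shapes and parameter values are illustrative, not taken from bn_model.zip):

```python
import numpy as np

rng = np.random.default_rng(42)
sh = (1, -1, 1, 1)  # broadcast 1-D channel params over NCHW tensors

# Illustrative tensors matching the report's channel count (64)
x = rng.standard_normal((1, 64, 16, 16)).astype(np.float32)
scale = rng.standard_normal(64).astype(np.float32)
bias = rng.standard_normal(64).astype(np.float32)
mean = rng.standard_normal(64).astype(np.float32)
var = (rng.random(64) + 0.5).astype(np.float32)
eps = np.float32(1e-5)

# Model before: standalone BatchNormalization (inference formula)
y_before = (
    scale.reshape(sh) * (x - mean.reshape(sh))
    / np.sqrt(var.reshape(sh) + eps)
    + bias.reshape(sh)
)

# Model after BatchNormToAffine: folded into y = A*x + B
A = scale / np.sqrt(eps + var)
B = bias - A * mean
y_after = A.reshape(sh) * x + B.reshape(sh)

# The two outputs agree only up to float32 rounding, not bit-exactly,
# so an exact comparison of the executions can report a mismatch
max_diff = np.abs(y_before - y_after).max()
```

With a tolerance-based comparison (e.g. np.allclose) the two executions match; a bit-exact comparison can fail.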
Expected behavior
The functional behavior should not change due to the transformation.
Actual behavior
The outputs before and after the transformation do not match.
Possible fix
The mismatch seems to be a rounding error originating in this calculation: A = scale / np.sqrt(epsilon + variance)
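The rounding hypothesis can be illustrated by computing the folded multiplier A in float32 and comparing it against a float64 reference (parameter values here are synthetic, not from the attached model):

```python
import numpy as np

rng = np.random.default_rng(0)
scale = rng.standard_normal(64).astype(np.float32)
var = (rng.random(64) + 0.5).astype(np.float32)
eps = np.float32(1e-5)

# Folded multiplier as in the transformation, evaluated in float32
A32 = scale / np.sqrt(eps + var)

# Same quantity in float64 as a higher-precision reference
A64 = scale.astype(np.float64) / np.sqrt(np.float64(eps) + var.astype(np.float64))

# The float32 result deviates from the reference only by rounding error
# (on the order of float32 machine epsilon for values of this magnitude)
err = np.abs(A32.astype(np.float64) - A64).max()
```

This suggests the before/after outputs should be compared with a tolerance (e.g. np.isclose) rather than bit-exactly, or the folded parameters should be computed in float64 before casting back.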