QBatchNormalization does not support the QKeras po2 quantizers
Prerequisites
Please make sure to check off these prerequisites before submitting a bug report.
- [Y] Test that the bug appears on the current version of the master branch. Make sure to include the commit hash of the commit you checked out.
- [Y] Check that the issue hasn't already been reported, by checking the currently open issues.
- [Y] If there are steps to reproduce the problem, make sure to write them down below.
- [Y] If relevant, please include the hls4ml project files, which were created directly before and/or after the bug.
Quick summary
When I use hls4ml to convert a QBatchNormalization layer, I am warned that the po2 quantizers are not supported.
Details
My QBatchNormalization layer uses po2 quantizers, but hls4ml reports that it cannot support them.
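For context, a po2 quantizer rounds each value to a signed power of two, so only a sign and an exponent need to be stored. A minimal illustration using the public QKeras API (the input values are made up):

```python
import numpy as np
from qkeras.quantizers import quantized_po2

# quantized_po2 rounds each input to the nearest signed power of two.
q = quantized_po2(bits=8)
w = np.array([0.3, -1.7, 5.0], dtype=np.float32)
print(np.array(q(w)))  # expect roughly [0.25, -2.0, 4.0]
```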
Steps to Reproduce
- My QBN layer is defined as follows:
```python
x = QBatchNormalization(
    gamma_quantizer="quantized_relu_po2(bits=8)",
    beta_quantizer="quantized_po2(bits=8)",
    mean_quantizer="quantized_po2(bits=8)",
    variance_quantizer="quantized_relu_po2(bits=8)",
    axis=bn_axis, epsilon=1.001e-5, name="conv1_0_bn")(x)
```
- I convert only this single layer via hls4ml, as in the sketch below.
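A minimal conversion sketch, assuming the layer above is wrapped in a one-layer Keras Model (the input shape is taken from the log below; `axis=-1` stands in for `bn_axis`, and the output directory is a placeholder):

```python
import hls4ml
from qkeras import QBatchNormalization
from tensorflow.keras.layers import Input
from tensorflow.keras.models import Model

# Wrap the QBatchNormalization layer defined above in a single-layer model.
inputs = Input(shape=(160, 160, 64))
outputs = QBatchNormalization(
    gamma_quantizer="quantized_relu_po2(bits=8)",
    beta_quantizer="quantized_po2(bits=8)",
    mean_quantizer="quantized_po2(bits=8)",
    variance_quantizer="quantized_relu_po2(bits=8)",
    axis=-1, epsilon=1.001e-5, name="conv1_0_bn")(inputs)
model = Model(inputs, outputs)

# The "Unsupported quantizer" messages are printed while the model is parsed.
config = hls4ml.utils.config_from_keras_model(model, granularity='name')
hls_model = hls4ml.converters.convert_from_keras_model(
    model, hls_config=config, output_dir='hls4ml_prj')
```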
Expected behavior
hls4ml should recognize all four quantizers and convert the layer successfully.
Actual behavior
hls4ml reports:

```
Layer name: input_5, layer type: InputLayer, input shapes: [[None, 160, 160, 64]], output shape: [None, 160, 160, 64]
Unsupported quantizer: quantized_po2
Unsupported quantizer: quantized_relu_po2
Unsupported quantizer: quantized_po2
Unsupported quantizer: quantized_relu_po2
Layer name: conv2_block1_1_bn, layer type: QBatchNormalization, input shapes: [[None, 160, 160, 64]], output shape: [None, 160, 160, 64]
```
Possible fix
I browsed the relevant files and traced the problem to line 11 of hls4ml/converters/keras/qkeras.py, where hls4ml applies special handling to the QBatchNormalization layer: https://github.com/fastmachinelearning/hls4ml/blob/07c5bb65bb0560ebf22c428b3357b4d794cbb327/hls4ml/converters/keras/qkeras.py#L11
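A rough sketch of the kind of change I have in mind, assuming the quantizer dispatch looks roughly like this (I have not verified the exact code at that line; the function body below is illustrative, not the actual hls4ml source):

```python
# Hypothetical dispatch in hls4ml/converters/keras/qkeras.py. The structure
# below is an assumption for illustration; only the class names
# QKerasQuantizer and QKerasPO2Quantizer are taken from hls4ml itself.
def get_quantizer_from_config(keras_layer, quantizer_var):
    quantizer_config = keras_layer['config'][f'{quantizer_var}_quantizer']
    if 'po2' in quantizer_config['class_name']:
        # Route power-of-two quantizers to the existing po2 handling
        # instead of rejecting them for QBatchNormalization: a po2 weight
        # stores only a sign and an exponent, so it maps to an
        # exponent-type precision rather than a generic fixed-point type.
        return QKerasPO2Quantizer(quantizer_config)
    return QKerasQuantizer(quantizer_config)
```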
I remember the model parsing sometimes emitting extra warnings like this in the past due to some convoluted logic, while still producing working output, so I want to make sure that isn't the case here. Did you confirm that the produced code actually does not work properly? That may well be the case, but I wanted to ask.