calad0i
`kernel_quantizer="quantized_bits(16,6)"` is the problem. By default, QKeras quantizers allow training a scaling factor, which tends to mess everything up with hls4ml. You will need to set `kernel_quantizer="quantized_bits(16,6,alpha=1.)"`.
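For reference, a minimal sketch of what that looks like on a layer (the `QDense` width and shapes here are just illustrative):

```python
from tensorflow.keras.layers import Input
from tensorflow.keras.models import Model
from qkeras import QDense

# Illustrative toy model; alpha=1. pins the quantizer scale so hls4ml
# sees plain fixed-point weights instead of a trained scaling factor.
inputs = Input(shape=(16,))
outputs = QDense(
    8,
    kernel_quantizer="quantized_bits(16,6,alpha=1.)",
    bias_quantizer="quantized_bits(16,6,alpha=1.)",
)(inputs)
model = Model(inputs, outputs)
```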
Currently, an `accum_t` is set for pooling layers, but it is only effective for the `io_stream` implementation: [vivado_backend.py#L376-L380](https://github.com/calad0i/hls4ml/blob/eb70a61d0da77d880de055065e3837de5846cc72/hls4ml/backends/vivado/vivado_backend.py#L376-L380), [nnet_pooling_stream.h#L386](https://github.com/calad0i/hls4ml/blob/eb70a61d0da77d880de055065e3837de5846cc72/hls4ml/templates/vivado/nnet_utils/nnet_pooling_stream.h#L386). (That path could itself be problematic at the moment; see #917.) Maybe we...
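If you want to control that accumulator in the meantime, a hedged sketch of overriding it through the per-layer config (the layer name `avg_pool` and the chosen precision are made up; per the lines linked above, the override only takes effect with `io_stream`):

```python
import hls4ml

config = hls4ml.utils.config_from_keras_model(model, granularity='name')
# 'avg_pool' is a hypothetical layer name; substitute the pooling layer
# from your own model. The accum override is only honored for io_stream.
config['LayerName']['avg_pool']['Precision']['accum'] = 'ap_fixed<24,12>'
hls_model = hls4ml.converters.convert_from_keras_model(
    model, hls_config=config, io_type='io_stream', backend='Vivado'
)
```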