hls4ml
Latency Pooling Header Updates
Precision override moved to #855. Requires #855 now; expect tests to fail without it.
Includes and supersedes #917. On the same issue: #947.
This PR does the following for the Vivado and Vitis backends:
- Make average pooling layers in `io_parallel` mode respect `accum_t`.
- Refactor the pooling-layer code for `io_parallel`.
- Performance validated to be equal or better.
- The new implementation avoids a few potential overflows in `tmp` and `scale` present in the original average-pool implementation.
- ~~Avoid overriding global variable names when automatically overriding precisions; derive `accum_t` properly from the following layer's `result_t`.~~ Moved to #855.
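The overflow guarded against here can be illustrated with a plain NumPy sketch (illustration only, with made-up integer widths; the actual HLS templates use `ap_fixed` types, and the variable names below are hypothetical):

```python
import numpy as np

# Summing a 4x4 average-pool window of int8 samples in an int8 temporary
# wraps around, while a wider accumulator (the role accum_t plays in the
# new implementation) does not.
window = np.full(16, 100, dtype=np.int8)   # every sample fits in int8

narrow_tmp = window.sum(dtype=np.int8)     # like an overflow-prone tmp
wide_accum = window.sum(dtype=np.int16)    # like a properly sized accum_t

print(int(narrow_tmp) // 16)   # 1600 wraps to 64; 64 // 16 = 4 (wrong)
print(int(wide_accum) // 16)   # 1600 // 16 = 100 (correct average)
```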
With the current setting, pooling layers will be bit-accurate if the following layer's `result_t` is configured properly.
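As a sketch of what "configured properly" means here, the per-layer override looks like this (the layer name `avg_pool` and the dict contents are illustrative; in a real flow the dict comes from `hls4ml.utils.config_from_keras_model(model, granularity='name')`):

```python
# Hypothetical config dict shaped like hls4ml's name-granularity output;
# a real one would come from hls4ml.utils.config_from_keras_model(...).
hls_config = {
    'Model': {'Precision': 'ap_fixed<16,6>', 'ReuseFactor': 1},
    'LayerName': {
        'avg_pool': {'Precision': {'result': 'ap_fixed<16,6>',
                                   'accum': 'ap_fixed<16,6>'}},
    },
}

# Let hls4ml derive the pooling accumulator from the following layer's
# result_t instead of a fixed (overflow-prone) width:
hls_config['LayerName']['avg_pool']['Precision']['accum'] = 'auto'

print(hls_config['LayerName']['avg_pool']['Precision']['accum'])  # auto
```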
Synthesis on some toy models
Type of change
- [x] Bug fix (non-breaking change that fixes an issue)
Tests
Will be covered in #914 (WIP).
Checklist
- [x] I have read the guidelines for contributing.
- [x] I have commented my code, particularly in hard-to-understand areas.
- [ ] I have made corresponding changes to the documentation.
- [x] My changes generate no new warnings.
- [x] I have installed and run `pre-commit` on the files I edited or added.
- [ ] I have added tests that prove my fix is effective or that my feature works.
I wonder if this is part of the precision inference that's in the works: #855. Let me try to update #855 (by making pull requests against it) so we can merge it soon.
There seem to be some real pytest failures associated with this PR. @calad0i, can you take a look?
Fixed the Vitis and MNIST tests. The PyTorch test crashed outright; it needs some debugging to see what is wrong.
`test/pytest/test_weight_writer.py` is not updated in this PR though...
Don't worry about `test_weight_writer.py`.
I am not sure our pytest environment is functional. I can try running the tests again, but they may fail for infrastructure reasons.
The `test_qkeras.py::test_qgru[Vivado]` failure we can ignore. For cnn_mnist, why does this error cause the failure? We discussed possibly removing that test, but I want to understand why it's failing. Would setting the accumulator to `auto` fix the issue?
> The `test_qkeras.py::test_qgru[Vivado]` failure we can ignore. For cnn_mnist, why does this error cause the failure? We discussed possibly removing that test, but I want to understand why it's failing. Would setting the accumulator to `auto` fix the issue?
Correct. The failure is caused precisely by overflow in the average pool, which previously had an overriding mechanism that is now removed. Adding the line `hls_config['LayerName']['avg_pool']['Precision']['accum'] = 'auto'` to the test and naming the layer properly will fix it.
Let's add the fix to the test and maybe merge this? I think making `auto` the default for some layer types could be a different PR; it's really a change in the `config_from_keras_model` functionality.
We can fix the failing `test_cnn_mnist.py` with:
```diff
(fastml310) jovans-mac:pytest jmitrevs$ git diff
diff --git a/test/pytest/test_cnn_mnist.py b/test/pytest/test_cnn_mnist.py
index ab3365f2..a8b77862 100644
--- a/test/pytest/test_cnn_mnist.py
+++ b/test/pytest/test_cnn_mnist.py
@@ -70,6 +70,8 @@ def test_mnist_cnn(keras_model, mnist_data, backend, io_type, strategy):
     hls_config = hls4ml.utils.config_from_keras_model(keras_model, granularity='name', backend=backend)
     hls_config['Model']['Strategy'] = strategy
     hls_config['LayerName']['softmax']['Implementation'] = 'stable'
+    hls_config['LayerName']['average_pooling2d']['Precision']['accum'] = 'auto'
+    hls_config['LayerName']['max_pooling2d']['Precision']['accum'] = 'auto'
     output_dir = str(test_root_path / f'hls4mlprj_cnn_mnist_{backend}_{io_type}_{strategy}')
     hls_model = hls4ml.converters.convert_from_keras_model(
```