
Latency Pooling Header Updates

Open calad0i opened this issue 1 year ago • 10 comments

The precision override has moved to #855, which this PR now requires. Expect tests to fail without it.

Includes and supersedes #917. On the same issue: #947.

This PR does the following for the Vivado and Vitis backends:

  • Make average pooling layers in io_parallel mode respect accum_t.
  • Refactor the pooling layer code for io_parallel.
    • Performance validated to be equal or better.
    • The new implementation avoids some potential overflows in tmp and scale present in the original average pooling implementation.
  • ~~Avoid overriding global variable names when automatically overriding precisions; derive accum_t properly from the layer's result_t.~~ Moved to #855.

With the current setting, pooling layers will be bit-accurate if the layer's result_t is configured properly.
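
To illustrate the overflow concern the refactor addresses: summing a pooling window of N values grows the required accumulator width by about ceil(log2(N)) bits. A minimal sketch of that arithmetic in Python, assuming ap_fixed<W,I>-style inputs (the helper below is hypothetical, not part of hls4ml):

import math

def avg_pool_accum_type(width, integer, pool_size):
    # Hypothetical helper: bits needed to sum `pool_size` values of
    # ap_fixed<width, integer> without overflow.
    growth = math.ceil(math.log2(pool_size))
    return width + growth, integer + growth

# A 2x2 average pool sums 4 values, so ap_fixed<16,6> inputs
# need at least an ap_fixed<18,8> accumulator.
print(avg_pool_accum_type(16, 6, 4))  # (18, 8)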

Synthesis on some toy models (results table not preserved in this copy)

Type of change

  • [x] Bug fix (non-breaking change that fixes an issue)

Tests

Will be covered in #914 (WIP).

Checklist

  • [x] I have read the guidelines for contributing.
  • [x] I have commented my code, particularly in hard-to-understand areas.
  • [ ] I have made corresponding changes to the documentation.
  • [x] My changes generate no new warnings.
  • [x] I have installed and run pre-commit on the files I edited or added.
  • [ ] I have added tests that prove my fix is effective or that my feature works.

calad0i avatar Feb 21 '24 05:02 calad0i

pre-commit.ci autofix

calad0i avatar Feb 21 '24 05:02 calad0i

I wonder if this is part of the precision inference that's in the works: #855. Let me try to update #855 (by making pull requests to it) so we can merge it soon.

jmitrevs avatar Feb 21 '24 21:02 jmitrevs

There seem to be some real pytest failures associated with this PR. @calad0i, can you take a look?

jmitrevs avatar Apr 15 '24 16:04 jmitrevs

Fixed the Vitis and MNIST tests. The PyTorch test crashed outright; it needs some debugging to find out what is wrong.

calad0i avatar Apr 15 '24 20:04 calad0i

test/pytest/test_weight_writer.py is not updated in this PR though... pre-commit.ci autofix

calad0i avatar Apr 26 '24 22:04 calad0i

Don't worry about test_weight_writer.py.

jmitrevs avatar Apr 26 '24 22:04 jmitrevs

I am not sure if our pytest environment is functional. I can try running the tests again, but they may fail for infrastructure reasons.

jmitrevs avatar Apr 26 '24 22:04 jmitrevs

The test_qkeras.py::test_qgru[Vivado] failure we can ignore. For cnn_mnist, why does this error cause the failure? We talked about possibly removing that test, but I want to understand why it's failing. Would setting the accumulator to auto fix the issue?

jmitrevs avatar May 03 '24 23:05 jmitrevs

> The test_qkeras.py::test_qgru[Vivado] failure we can ignore. For cnn_mnist, why does this error cause the failure? We talked about possibly removing that test, but I want to understand why it's failing. Would setting the accumulator to auto fix the issue?

Correct. The failure is caused exactly by overflow in the average pool, which previously had an overriding mechanism that has since been removed. Adding the line hls_config['LayerName']['avg_pool']['Precision']['accum'] = 'auto' to the test and naming the layer accordingly will fix it; a sketch follows below.
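
A minimal sketch of that fix, assuming the Keras pooling layer is explicitly named avg_pool so the per-layer config key matches (the model here is purely illustrative):

import hls4ml
from tensorflow import keras
from tensorflow.keras import layers

# Illustrative model; naming the layer makes the LayerName key predictable.
model = keras.Sequential([
    keras.Input(shape=(8, 8, 4)),
    layers.AveragePooling2D(pool_size=2, name='avg_pool'),
])

hls_config = hls4ml.utils.config_from_keras_model(model, granularity='name')
# Let hls4ml derive the accumulator precision instead of using the default
hls_config['LayerName']['avg_pool']['Precision']['accum'] = 'auto'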

calad0i avatar May 04 '24 01:05 calad0i

Let's add the fix to the test and maybe merge this? I think making auto a default for some layer types could be a different PR; it's really a change to the config_from_keras_model functionality.

jmitrevs avatar May 16 '24 16:05 jmitrevs

We can fix the failing test_cnn_mnist.py with:

(fastml310) jovans-mac:pytest jmitrevs$ git diff
diff --git a/test/pytest/test_cnn_mnist.py b/test/pytest/test_cnn_mnist.py
index ab3365f2..a8b77862 100644
--- a/test/pytest/test_cnn_mnist.py
+++ b/test/pytest/test_cnn_mnist.py
@@ -70,6 +70,8 @@ def test_mnist_cnn(keras_model, mnist_data, backend, io_type, strategy):
     hls_config = hls4ml.utils.config_from_keras_model(keras_model, granularity='name', backend=backend)
     hls_config['Model']['Strategy'] = strategy
     hls_config['LayerName']['softmax']['Implementation'] = 'stable'
+    hls_config['LayerName']['average_pooling2d']['Precision']['accum'] = 'auto'
+    hls_config['LayerName']['max_pooling2d']['Precision']['accum'] = 'auto'
     output_dir = str(test_root_path / f'hls4mlprj_cnn_mnist_{backend}_{io_type}_{strategy}')
 
     hls_model = hls4ml.converters.convert_from_keras_model(

jmitrevs avatar May 31 '24 14:05 jmitrevs