Automatic type inference for `param_t` in Parametrised Activations
This small PR implements the inference of the W (width) and I (integer bits) parameters for a given floating-point constant. It is used for the parameters of parametrised activations.
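The idea, roughly, is to pick the narrowest fixed-point type that represents the constant exactly, capped at a maximum width. A simplified sketch of the general approach (illustrative only, not the actual code in this PR):

```python
import math


def infer_constant_precision(value: float, max_width: int = 8):
    """Illustrative sketch, not this PR's implementation: return a
    (width, integer_bits, signed) triple for the narrowest fixed-point type
    that holds `value` exactly, capping the mantissa at `max_width` bits."""
    signed = value < 0
    mag = abs(value)
    if mag == 0:
        return 1, 1, False  # a 1-bit unsigned zero is enough

    # Position of the most significant bit: 2**msb <= mag < 2**(msb + 1).
    msb = math.floor(math.log2(mag))

    # Add mantissa bits below the MSB until the value is exact or the cap hits.
    width = 1
    while width < max_width:
        scaled = mag * 2.0 ** (width - 1 - msb)
        if scaled == int(scaled):
            break
        width += 1

    integer_bits = msb + 1  # integer bits needed to reach the MSB
    if signed:
        width += 1          # extra sign bit
        integer_bits += 1   # ap_fixed/ac_fixed count the sign bit in I
    return width, integer_bits, signed
```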
Type of change
- [x] New feature (non-breaking change which adds functionality)
Tests
I ran some tests related to Parametrised Activations that are already present in the hls4ml pytests.
Checklist
- [x] I have read the guidelines for contributing.
- [x] I have commented my code, particularly in hard-to-understand areas.
- [ ] I have made corresponding changes to the documentation.
- [x] My changes generate no new warnings.
- [x] I have installed and run `pre-commit` on the files I edited or added.
- [ ] I have added tests that prove my fix is effective or that my feature works.
I see that some tests related to oneAPI fail; it's hard for me to understand why they fail. How should I proceed?
If you have a Linux setup it should be pretty straightforward to install oneAPI, and then you can run the pytests. But we can wait and look at the other issues first; maybe it will clear itself up.
Note the utility we have in hls4ml already: https://github.com/fastmachinelearning/hls4ml/blob/main/hls4ml/utils/fixed_point_utils.py#L139-L141
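For readers without the link open: that module collects small fixed-point sizing helpers; the kind of thing meant is a next-power-of-two computation. The sketch below is illustrative, not a copy of the linked lines:

```python
def next_pow2(x: int) -> int:
    # Illustrative only; see the linked fixed_point_utils.py for the real helper.
    # Smallest power of two greater than or equal to x (for x >= 1).
    return 1 << (x - 1).bit_length()
```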
I wanted to try installing oneAPI myself, so I played with this PR a bit. The issue seems to be that the precision of the leaky ReLU parameter is reduced significantly, from `typedef ac_fixed<16,6,true> quantizer_param_t;` to a one-bit `typedef ac_fixed<1,0,false> quantizer_param_t;`. Vivado and the other backends seem to be able to handle it, but I'm not sure it makes sense in this case, because we have negative slopes here and need the type to be signed. And for oneAPI, a signed variable is required to have at least two bits:
```
signed _BitInt must have a bit size of at least 2
```
So we need to make sure to take this into account when inferring the precision for the parameters.
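To make the point concrete, the kind of guard I have in mind looks like this (purely illustrative, not a patch against this PR; names are made up):

```python
def ensure_signed_min_width(width: int, integer: int, signed: bool):
    """Hypothetical guard (not the actual fix in this PR): when the inferred
    parameter type must be signed, e.g. for negative slopes, give it at least
    2 bits, since a signed ac_fixed lowers to a signed _BitInt in oneAPI."""
    if signed and width < 2:
        # Grow W and I by the same amount so the fractional bits stay the same
        # and the representable range is only extended, never shifted.
        integer += 2 - width
        width = 2
    return width, integer, signed
```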
Hey @nghielme, any news on this one?
I'll take a look soon
Please check the logic of the precision setting.
I added a unit test to cover the various options, so I am more confident. It also uncovered an error in the max setting for unsigned `FixedPrecisionType`, which I fixed and am including here, though it's logically independent.
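For reference, the bound arithmetic that fix has to get right is roughly the following (a sketch, not the actual `FixedPrecisionType` code):

```python
def fixed_point_bounds(width: int, integer: int, signed: bool):
    """Sketch of the <W, I> fixed-point bounds; the quantum (LSB value)
    is 2**(I - W)."""
    quantum = 2.0 ** (integer - width)
    if signed:
        # Largest code is 2**(W-1) - 1, smallest is -2**(W-1).
        return -(2.0 ** (integer - 1)), 2.0 ** (integer - 1) - quantum
    # Unsigned: largest code is 2**W - 1, so max = 2**I - quantum.
    return 0.0, 2.0 ** integer - quantum
```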
There were some weird pytest failures that I'm rerunning, but otherwise I think this can be merged now.
Looks good to me. One small note: I think the test could be rewritten in a more pytest-idiomatic way, like this:
```python
import pytest

# _get_precision_from_constant is the helper under test; import it from
# wherever it is defined in hls4ml.


@pytest.mark.parametrize(
    "val, expected_width",
    [
        (0, 1),
        (-1024, 2),
        (1024, 1),
        (0.03125, 1),
        (-0.03125, 2),
        (1.25, 3),
        (-1.25, 4),
        (1.1, 8),
        (-1.1, 9),
    ],
)
def test_precision_from_constant_unit(val, expected_width):
    """Test determining precision needed for a constant."""
    max_width = 8
    fp = _get_precision_from_constant(val, max_width)
    assert fp.min <= val <= fp.max
    assert fp.width == expected_width
    assert fp.signed == (val < 0)
    quantum = 2.0 ** -fp.fractional
    if expected_width < max_width:
        assert val % quantum == 0
```
Tests have only the "expected" failures now, so I think this is ok. I agree with Nicolo's comment on the pytest though, so if you could integrate that before merging that would be great, Jovan.
I just put Nicolo's test in the pytests instead of the one I had (with minor pre-commit changes). I also updated to the latest main so ideally the test failures should be gone.
Looks good now. @jmitrevs since you have the changes requested, you'll need to merge it.