update keras activation parsing, especially leaky relu
Description
Issue #1076 showed that hls4ml did not properly parse leaky_relu when passed as a parameter (activation='leaky_relu') in Keras. One could argue that this usage was not explicitly described in v2 of the API documentation, but it was supported in the code, and it is officially documented in v3 of the API, so we should support it.
As an aside, I don't think we handle relu with a nonzero alpha parameter properly either; maybe I should add a check that converts relu to leaky_relu in that case.
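As a sketch of the check suggested above (the function and parameter-dict names here are illustrative, not hls4ml's actual API), a nonzero alpha on relu can be normalized to leaky_relu before the rest of the parser runs:

```python
# Hypothetical sketch: normalize Keras's relu-with-alpha to leaky_relu.
# Names are illustrative, not hls4ml's real internals.
def normalize_activation(name, params):
    """Map an activation name plus its keyword arguments to a canonical form."""
    # Keras's relu accepts an alpha (negative-slope) argument; when it is
    # nonzero, the activation is effectively leaky_relu.
    if name == 'relu' and params.get('alpha', 0.0) != 0.0:
        return 'leaky_relu', {'alpha': params['alpha']}
    return name, params
```

A parser could call this once, early, so every downstream handler only ever sees the canonical activation name.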
I also noticed that the Activation layer handler was not called for activations passed as arguments. Special cases had to be handled in two places: once in the Activation layer handler, and once in the section for activations passed as arguments. I believe this is error-prone, and elu passed as an argument would break the oneAPI backend unless I duplicated code. I think it is better to unify the two paths and always call the Activation layer handler, so that is what I did instead.
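The unification described above can be sketched roughly as follows. This is an illustrative toy, not hls4ml's real code: both explicit Activation layers and inline activation arguments route through one registry of handlers, so special cases (and the v2 `alpha` vs. v3 `negative_slope` parameter rename) live in a single place:

```python
# Illustrative sketch of a single activation-handler entry point.
# Handler names, dict keys, and the 0.3 default are assumptions.
ACTIVATION_HANDLERS = {}

def register(name):
    """Decorator that registers a handler for a given activation name."""
    def wrap(fn):
        ACTIVATION_HANDLERS[name] = fn
        return fn
    return wrap

@register('leaky_relu')
def handle_leaky_relu(params):
    # Keras v3 renamed the slope parameter from 'alpha' to
    # 'negative_slope'; accept both spellings here.
    alpha = params.get('negative_slope', params.get('alpha', 0.3))
    return {'class_name': 'LeakyReLU', 'activ_param': alpha}

def parse_activation(name, params=None):
    """Single entry point, used both for Activation layers and for
    activations passed as layer arguments."""
    params = params or {}
    handler = ACTIVATION_HANDLERS.get(name)
    if handler is None:
        # Plain activations with no extra parameters fall through.
        return {'class_name': 'Activation', 'activation': name}
    return handler(params)
```

With this shape, adding elu support (or any new parameterized activation) means registering one handler rather than editing two code paths.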
Type of change
- [x] Bug fix (non-breaking change that fixes an issue)
- [x] New feature (non-breaking change which adds functionality)
Tests
I added a test for leaky_relu passed as an argument; it is also important that the existing tests continue to pass.
Checklist
- [x] I have read the guidelines for contributing.
- [x] I have commented my code, particularly in hard-to-understand areas.
- [ ] I have made corresponding changes to the documentation.
- [x] My changes generate no new warnings.
- [x] I have installed and run `pre-commit` on the files I edited or added.
- [x] I have added tests that prove my fix is effective or that my feature works.
I also added support for the new v3 API parameter names for leaky relu.
I want to understand the need for the assert change, so I am putting this in draft for now.