Does hls4ml currently support transformer architectures?
Does hls4ml currently support transformer architectures? I saw in the paper "Ultra Fast Transformers on FPGAs for Particle Physics Experiments" that multi-head attention (MHA) support will be made available in the near future.
It's not available yet; we expect a pull request integrating this code in the coming weeks.
I'm looking forward to it, thank you!
Greetings @JanFSchulte, any chance this integration has happened since October? Thanks
Hello,
Has the integration happened since October 2024?
There is some experimental support in open PRs: #1163 for multi-head attention and #1110 for LayerNorm.