QNN Error with GroupNorm
Hi, I was wondering how you guys are dealing with GroupNorm. I ran into the following odd behavior on the Snapdragon 8 Gen 2 DSP (HTP, chipset: SM8550).
| GroupNorm(groups=32) | 1, 64, 512, 512 | 1, 64, 256, 256 | 1, 64, 128, 128 | 1, 64, 64, 64 |
|---|---|---|---|---|
| msec | 1528.3 | 371.1 | error | 0.794 |
The test used QNN 2.16.
Do you know how I can report this bug to the QNN team?
Thanks!
Tagging @quic-akinlawo @quic-mangal here.
It may sound odd, but what can I do? It is what it is, i.e., hardware and on-device stuff, a.k.a. too many moving parts.
I can't replicate the error for input 1,64,128,128 anymore. The compile result and latency of a model using this op are shown in the screenshot below: the first column is latency in msec and the second is the QNN version.
Details (for the record)
- I cannot replicate the QNN compilation error anymore.
- I tried different QNN versions and ONNX opset versions.
- It's possible that the versions of onnx and PyTorch themselves play a role. I spent some time testing combinations, but decided to move on and enjoy the good news.
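Since version mismatches can reintroduce the issue, one workaround I've seen people use (not something confirmed by the QNN team) is to decompose GroupNorm into reshape + mean/variance + affine primitives, so the exported graph avoids the fused GroupNorm op entirely. A sketch of that decomposition, numerically matching `nn.GroupNorm`:

```python
import torch
import torch.nn as nn

def manual_group_norm(x, num_groups, weight, bias, eps=1e-5):
    # Decompose GroupNorm into elementwise ops: reshape into groups,
    # normalize with per-group mean/variance, then apply the affine params.
    n, c, h, w = x.shape
    xg = x.reshape(n, num_groups, c // num_groups, h, w)
    mean = xg.mean(dim=(2, 3, 4), keepdim=True)
    # GroupNorm uses the biased variance estimate.
    var = xg.var(dim=(2, 3, 4), keepdim=True, unbiased=False)
    xg = (xg - mean) / torch.sqrt(var + eps)
    x = xg.reshape(n, c, h, w)
    return x * weight.view(1, c, 1, 1) + bias.view(1, c, 1, 1)
```

Whether this is actually faster or more robust on HTP than the fused op would need to be profiled per QNN version.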