chore: Update layer_norm converter to use INormalizationLayer
Description
Update the aten::layer_norm converter to use INormalizationLayer. This resolves the following precision warning:
WARNING: [Torch-TensorRT TorchScript Conversion Context] - Running layernorm after self-attention in FP16 may cause overflow. Exporting the model to the latest available ONNX opset (later than opset 17) to use the INormalizationLayer, or forcing layernorm layers to run in FP32 precision can help with preserving accuracy.
This more closely matches the behavior of onnx-tensorrt: https://github.com/onnx/onnx-tensorrt/blob/main/builtin_op_importers.cpp#L2270
Covered by the existing aten::layer_norm converter tests (updated to remove a reshape that would invalidate the test).
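For context, here is a minimal sketch (assumptions, not the actual converter code) of how aten::layer_norm can map onto INormalizationLayer with the TensorRT C++ API; the helper name, parameters, and axes-mask computation are illustrative, and real converters would pull these values from the conversion context:

```cpp
// Sketch only: building aten::layer_norm with INormalizationLayer
// (TensorRT >= 8.6) via the C++ API. All names here are hypothetical.
#include "NvInfer.h"

nvinfer1::ITensor* add_layer_norm(
    nvinfer1::INetworkDefinition& network,
    nvinfer1::ITensor& input,
    nvinfer1::ITensor& gamma,  // scale, broadcastable to the input shape
    nvinfer1::ITensor& beta,   // bias, broadcastable to the input shape
    int32_t num_norm_dims,     // rank of normalized_shape
    float eps) {
  // Normalize over the trailing num_norm_dims axes, mirroring
  // aten::layer_norm semantics (and the onnx-tensorrt importer).
  int32_t rank = input.getDimensions().nbDims;
  uint32_t axes_mask = 0;
  for (int32_t axis = rank - num_norm_dims; axis < rank; ++axis) {
    axes_mask |= 1u << axis;
  }

  // INormalizationLayer computes the normalization in FP32 by default,
  // which avoids the FP16 overflow the warning describes.
  nvinfer1::INormalizationLayer* layer =
      network.addNormalization(input, gamma, beta, axes_mask);
  layer->setEpsilon(eps);
  return layer->getOutput(0);
}
```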
Fixes # (issue)
Type of change
Please delete options that are not relevant and/or add your own.
- Bug fix (non-breaking change which fixes an issue)
- New feature (non-breaking change which adds functionality)
- Breaking change (fix or feature that would cause existing functionality to not work as expected)
- This change requires a documentation update
Checklist:
- [ ] My code follows the style guidelines of this project (You can use the linters)
- [ ] I have performed a self-review of my own code
- [ ] I have commented my code, particularly in hard-to-understand areas and hacks
- [ ] I have made corresponding changes to the documentation
- [ ] I have added tests to verify my fix or my feature
- [ ] New and existing unit tests pass locally with my changes
- [ ] I have added the relevant labels to my PR so that relevant reviewers are notified
@peri044 is CI broken on main? I'd be surprised if I caused these dynamo failures, but let me know if I should take a look.