torch-mlir
TOSA implementation of `gelu` is not precise enough for x < -0.5
If the input to the e2e test for `gelu` is changed to use a distribution with values in the range [-1, 0], the test fails when using the TOSA backend.
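A minimal sketch of that change to the test case follows. The import paths and the `low`/`high` keywords of `tu.rand` are assumptions based on the torch-mlir e2e test framework and may differ in your checkout; the intent is only to show the input range being moved to [-1, 0].

```python
# Sketch of the modified e2e test case (assumed layout of the torch-mlir
# e2e test suite; adjust import paths / tu.rand keywords to your checkout).
import torch
from torch_mlir_e2e_test.framework import TestUtils
from torch_mlir_e2e_test.registry import register_test_case
from torch_mlir_e2e_test.annotations import export, annotate_args


class ElementwiseGeluModule(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.gelu = torch.nn.GELU()

    @export
    @annotate_args([
        None,
        ([-1, -1], torch.float32, True),
    ])
    def forward(self, x):
        return self.gelu(x)


@register_test_case(module_factory=lambda: ElementwiseGeluModule())
def ElementwiseGeluModule_basic(module, tu: TestUtils):
    # Draw inputs from [-1, 0] instead of the default range to exercise
    # the region x < -0.5 where the TOSA result drifts.
    module.forward(tu.rand(5, 3, low=-1.0, high=0.0))
```

With that change, the TOSA backend produces the failure below.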
```
****** Failed tests - 1 tests
FAIL - "ElementwiseGeluModule_basic"
    @ trace item #0 - call to "forward"
    @ output of call to "forward"
    ERROR: value (Tensor with shape=[5, 3], dtype=torch.float32, min=-0.003645, max=-0.0003783, mean=-0.00127) is not close to golden value (Tensor with shape=[5, 3], dtype=torch.float32, min=-0.002967, max=-0.0001385, mean=-0.0008288)
```
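For context, the sketch below quantifies how an approximated `erf` propagates into GELU error on [-1, 0]. It compares the exact erf-based GELU against a GELU built from a low-order rational erf approximation (Abramowitz & Stegun 7.1.27). This is an assumption about the *style* of approximation used in the TOSA lowering, not a copy of it; check the actual legalization code for the formula it really uses.

```python
# Illustrative only: exact GELU vs. a GELU built on a rational erf
# approximation (A&S 7.1.27). Not the actual TOSA lowering; it just shows
# the scale of error such approximations introduce for inputs in [-1, 0].
import torch


def erf_approx(x: torch.Tensor) -> torch.Tensor:
    # A&S 7.1.27, valid for x >= 0; extended to x < 0 via erf(-x) = -erf(x).
    a1, a2, a3, a4 = 0.278393, 0.230389, 0.000972, 0.078108
    ax = x.abs()
    denom = (1 + a1 * ax + a2 * ax**2 + a3 * ax**3 + a4 * ax**4) ** 4
    return torch.sign(x) * (1 - 1 / denom)


def gelu_exact(x: torch.Tensor) -> torch.Tensor:
    return 0.5 * x * (1 + torch.erf(x / 2**0.5))


def gelu_with_erf_approx(x: torch.Tensor) -> torch.Tensor:
    return 0.5 * x * (1 + erf_approx(x / 2**0.5))


torch.manual_seed(0)
x = torch.rand(5, 3) - 1.0  # samples in [-1, 0], as in the modified test
err = (gelu_exact(x) - gelu_with_erf_approx(x)).abs()
print("max abs error:", err.max().item())
```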