Operator torch._ops.aten.linalg_vector_norm.default is not Aten Canonical
I am trying to create an Executorch program following the steps at Setting Up Executorch. I adapted them to my model successfully, but when I run the export script I get this error:
torch._export.verifier.SpecViolationError: Operator torch._ops.aten.linalg_vector_norm.default is not Aten Canonical.
Looking back at my model, the only operator that might have caused this issue seems to be:
q = torch.nn.functional.normalize(q, dim=-1)
But I am not sure how to deal with this error, and I am stuck at this step.
What would be a reasonable workaround for this case? Any suggestions/help is appreciated!
Regards!
It's supposed to be an ATen core op. This pull request should help: https://github.com/pytorch/pytorch/pull/125789 - we're trying to land it.
@cccclai that's great to know! I would appreciate it if this update could be pushed in, as this is blocking my efforts to benchmark my model on an edge device.
Thank you for your prompt response.
Before the PR is merged, a workaround is to call the following ops manually. The same idea is applied in https://github.com/pytorch/pytorch/pull/125789:
def decomp_linalg_vector_norm(a, order):
    # Compute the absolute values of the elements of 'a'
    abs_a = torch.abs(a)
    # Sum the absolute values raised to the power of `order`
    sum_p = torch.sum(torch.pow(abs_a, order))
    # Take the order-th root of the sum
    norm_value = torch.pow(sum_p, 1.0 / order)
    return norm_value
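A quick, self-contained way to sanity-check this decomposition is to compare it against `torch.linalg.vector_norm` directly (a sketch; the tensor shape and the set of orders tested here are arbitrary choices of mine):

```python
import torch

def decomp_linalg_vector_norm(a, order):
    # p-norm built only from abs/pow/sum, avoiding aten.linalg_vector_norm
    abs_a = torch.abs(a)
    sum_p = torch.sum(torch.pow(abs_a, order))
    return torch.pow(sum_p, 1.0 / order)

a = torch.randn(4, 8)
for order in (1.0, 2.0, 3.0):
    expected = torch.linalg.vector_norm(a, ord=order)
    actual = decomp_linalg_vector_norm(a, order)
    assert torch.allclose(actual, expected, atol=1e-5)
```

If the assertions pass, the decomposed version is numerically equivalent for those orders and can stand in for the unsupported op.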
Sure! I will give it a try.
def decomp_linalg_vector_norm(a, order=2):
    abs_a = torch.abs(a)
    sum_p = torch.sum(torch.pow(abs_a, order))
    norm_value = torch.pow(sum_p, 1.0 / order)
    out = torch.div(a, norm_value)
    return out
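For the original `F.normalize(q, dim=-1)` call, the norm has to be taken along a dimension and broadcast back, so a per-dim variant is needed. A self-contained sketch (the `keepdim` and `eps` handling are my additions, mirroring the documented behavior of `torch.nn.functional.normalize`):

```python
import torch

def decomp_normalize(a, order=2.0, dim=-1, eps=1e-12):
    # p-norm along `dim`, built from core ops only; keepdim=True so the
    # result broadcasts against `a` in the division below
    abs_a = torch.abs(a)
    sum_p = torch.sum(torch.pow(abs_a, order), dim=dim, keepdim=True)
    norm_value = torch.pow(sum_p, 1.0 / order)
    # clamp mirrors F.normalize's eps guard against division by zero
    return a / torch.clamp(norm_value, min=eps)

q = torch.randn(2, 5)
out = decomp_normalize(q, dim=-1)
ref = torch.nn.functional.normalize(q, dim=-1)
assert torch.allclose(out, ref, atol=1e-6)
```

Swapping this in for the `F.normalize` call in the model's forward should keep `aten.linalg_vector_norm` out of the exported graph.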
This resolves the issue for me. Since there is already a PR looking into adding this as a core ATen operator, I am closing this.