Fix torch.operator names
This implements the fix suggested by @heshuju in https://github.com/llvm/torch-mlir/issues/4108.
It fixes an issue that was blocking the LLVM integrate in IREE: https://github.com/iree-org/iree/actions/runs/15626565095/job/44156258415?pr=21092
We should make this change at the ReduceOpVariants level instead:
https://github.com/llvm/torch-mlir/blob/0c29ccf1439c91c7a2175a167d4bdb2c01a03e63/lib/Dialect/Torch/Transforms/ReduceOpVariants.cpp#L240
Instead of matching `torch.aten._scaled_dot_product_flash_attention_for_cpu`, it should match `aten._scaled_dot_product_flash_attention_for_cpu`, or preferably both, since I am guessing that ingestion can generate either form.
The test above fails because it assumes a `torch.` prefix on the operator name.
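As a minimal sketch of the "match both" idea, assuming the check in ReduceOpVariants.cpp compares the operator's name as a string (the helper name below is hypothetical; `llvm::StringRef::consume_front` is real LLVM API):

```cpp
#include "llvm/ADT/StringRef.h"

// Hypothetical helper: returns true for the op name whether or not it
// carries the "torch." prefix, since ingestion may generate either form.
static bool isScaledDotProductFlashAttentionForCpu(llvm::StringRef name) {
  // consume_front strips the prefix if present and is a no-op otherwise,
  // so both spellings reduce to the unprefixed comparison below.
  name.consume_front("torch.");
  return name == "aten._scaled_dot_product_flash_attention_for_cpu";
}
```

Normalizing the name this way covers both spellings with a single string literal per op, rather than duplicating every match site.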