Standardize ShardingSpec and generalize it
While working with `xla::OpSharding`, I found two instances where it is abstracted within PyTorch/XLA:
- `tensor_common.h`: `torch_xla::ShardingSpec`
- `tensor.h`/`.cpp`: `ShardingSpec`
Both definitions do essentially the same thing, and already cover part of what this issue plans. We should look into:
- Merging both `ShardingSpec` definitions into one to reduce redundancy (a rough sketch follows this list).
- Generalizing the merged definition to cover the rest of `xla::OpSharding`.
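To make the direction concrete, here is a minimal sketch of what a single merged definition might look like, assuming it wraps `xla::OpSharding` together with the global tensor shape. The field names, constructor, and include paths are assumptions for illustration, not the actual PyTorch/XLA definitions:

```cpp
// Hypothetical sketch of one merged ShardingSpec; names, fields, and include
// paths are illustrative assumptions, not the existing PyTorch/XLA code.
#include <utility>

#include "xla/shape.h"        // xla::Shape (assumed include path)
#include "xla/xla_data.pb.h"  // xla::OpSharding proto (assumed include path)

namespace torch_xla {

// A single canonical wrapper around xla::OpSharding that both tensor.h and
// tensor_common.h could share instead of each keeping their own copy.
struct ShardingSpec {
  ShardingSpec(xla::OpSharding sharding, xla::Shape global_shape)
      : sharding(std::move(sharding)), global_shape(std::move(global_shape)) {}

  xla::OpSharding sharding;  // underlying XLA sharding annotation
  xla::Shape global_shape;   // unpartitioned (global) tensor shape

  // One way to generalize over xla::OpSharding: expose the sharding type
  // (REPLICATED, MAXIMAL, TUPLE, OTHER, ...) so callers do not have to
  // inspect the proto directly.
  xla::OpSharding::Type type() const { return sharding.type(); }
};

}  // namespace torch_xla
```

With one type like this, either existing definition could become an alias for (or be deleted in favor of) the merged one, and any generalization work would extend a single struct instead of two diverging copies.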
I am planning to add more detail to https://github.com/pytorch/xla/issues/9181 and turn it into more of an RFC that ties everything together.