amithrm


This pull request enables the code needed to integrate the torchrun launcher with the XLA backend.

This is a cumulative PR with miscellaneous bug fixes and updates to the Zero Redundancy Optimizer from all the authors (AWS): Guangtai Huang, Rahul Solanki, Fei Wu, Amith Mamidala

### 🚀 The feature, motivation and pitch

### Motivation

SPMD sharding in pytorch/XLA offers model parallelism by sharding tensors within an operator. However, we need a mechanism to integrate this...
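The SPMD idea referenced above can be sketched with a toy, pure-Python example: one operand of a matrix multiply is partitioned across "devices", each shard is computed independently, and the partial results are gathered back. This is only a conceptual illustration of sharding a tensor within an operator; the function names here are made up for the sketch and are not the torch_xla SPMD API.

```python
# Toy illustration of SPMD-style sharding (conceptual only, not torch_xla).
# Matrices are nested Python lists; "devices" are simulated by list shards.

def matmul(a, b):
    """Plain dense matmul on nested lists."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

def shard_rows(a, num_devices):
    """Split matrix `a` row-wise into one shard per device."""
    step = (len(a) + num_devices - 1) // num_devices
    return [a[i:i + step] for i in range(0, len(a), step)]

def sharded_matmul(a, b, num_devices=2):
    """Each 'device' multiplies its row shard by the replicated `b`;
    the partial results are then concatenated (an all-gather)."""
    shards = shard_rows(a, num_devices)
    partials = [matmul(s, b) for s in shards]  # runs per-device in real SPMD
    return [row for partial in partials for row in partial]

a = [[1, 2], [3, 4], [5, 6], [7, 8]]
b = [[1, 0], [0, 1]]  # identity, so the product should equal `a`
assert sharded_matmul(a, b, num_devices=2) == matmul(a, b)
```

In real pytorch/XLA SPMD, the partitioning and the gather are handled by the XLA compiler over a device mesh rather than by explicit Python loops.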

### 🐛 Describe the bug

**The crash seen is the following:**

```
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
F0000 00:00:1709131242.311197 36940 hlo_sharding.cc:1034] Check failed: IsTuple()...
```