Support SPMD placeholder tensors
This PR extends the placeholder feature from https://github.com/pytorch/xla/issues/8612 to accommodate sharded tensors under SPMD. It also fixes a typo in the existing binding for collecting placeholder tensors.
Refer to https://github.com/pytorch/xla/pull/8785 for the existing survey of how to leverage the object's address as the handle address for placeholder tensors. In addition, we introduce placeholder-specific handling for mark sharding, since it currently entails an async data transfer. Note that for sharded data, we do not generate PjRtShardedData objects in the BackendData.
This allows users to leverage placeholder tensors for staging computations in their program without triggering any data transfer or allocating PJRT buffers under the hood.
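
A minimal usage sketch of the intended workflow. The mesh and `mark_sharding` calls use the public `torch_xla.distributed.spmd` API; the placeholder-creation helper name below is hypothetical and stands in for whatever binding this PR exposes for creating placeholder tensors.

```python
# Sketch only: `create_placeholder_tensor` is a hypothetical helper name,
# not the actual binding introduced by this PR.
import numpy as np
import torch
import torch_xla.distributed.spmd as xs
import torch_xla.runtime as xr

xr.use_spmd()

num_devices = xr.global_runtime_device_count()
mesh = xs.Mesh(np.arange(num_devices), (num_devices, 1), ("data", "model"))

# Hypothetical: creates an XLA tensor backed only by a placeholder handle,
# so no host-to-device transfer or PJRT buffer allocation happens.
t = create_placeholder_tensor((128, 128), dtype=torch.float32)

# With this PR, mark_sharding on a placeholder records the sharding
# annotation without kicking off the usual async data transfer.
xs.mark_sharding(t, mesh, ("data", "model"))

# The placeholder can now be used to stage/trace a computation.
y = t @ t
```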