Forward `XLATensorImpl::is_contiguous_custom` to `TensorImpl`.
This PR fixes #7998. Instead of always returning true, we now forward this call to the base class `TensorImpl::is_contiguous_custom()`.
The reason is that after https://github.com/pytorch/pytorch/pull/135498 is merged, XLA tensors' metadata might stop reflecting the actual XLA storage, which means that the tensors' strides might not always be contiguous. Whenever that happens, the `tensor.is_contiguous()` call should be consistent with the tensors' strides.
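A minimal sketch of the forwarding (assuming the existing `XLATensorImpl` override and the upstream `TensorImpl::is_contiguous_custom(at::MemoryFormat)` signature; not the exact diff):

```cpp
// Sketch of the change in the XLATensorImpl implementation.
bool XLATensorImpl::is_contiguous_custom(at::MemoryFormat memory_format) const {
  // Previously this always returned true, regardless of the tensor's strides.
  // Defer to the base class instead, which computes contiguity from the
  // recorded sizes and strides, so is_contiguous() stays consistent with
  // the tensor's metadata.
  return c10::TensorImpl::is_contiguous_custom(memory_format);
}
```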
cc @miladm @JackCaoG @alanwaketan