Bhavya Bahl
Fall back to CPU implementation for dot product if both tensors are int64. #6700
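A minimal sketch of the fallback idea, using a hypothetical helper `dot_with_cpu_fallback`; this is illustrative only, not the actual change in the PR.
```
import torch

def dot_with_cpu_fallback(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    # Illustrative only: if both operands are int64, compute the dot product
    # on CPU and move the result back to the original device, since the
    # accelerator lowering may not support this dtype combination.
    if a.dtype == torch.int64 and b.dtype == torch.int64:
        return torch.dot(a.cpu(), b.cpu()).to(a.device)
    return torch.dot(a, b)
```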
This issue is to track the 2.4 release backport. For any PRs you want to backport to 2.4, please reply with the following: original PR link, reason to backport, 2.4 backport PR...
For #8946 I was able to train a resnet model after building the wheel from source.
```
(torch312) ➜ xla git:(master) ✗ python examples/train_resnet_base.py
WARNING:root:libtpu.so and TPU device found. Setting...
```
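For reference, a minimal smoke-test sketch of the kind of workload such a run exercises, assuming a working locally built torch_xla wheel and an attached TPU; it is not the contents of examples/train_resnet_base.py.
```
import torch
import torchvision
import torch_xla.core.xla_model as xm

# Assumes the locally built torch_xla wheel is installed and a TPU is attached.
device = xm.xla_device()
model = torchvision.models.resnet18().to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# One dummy training step to confirm the build works end to end.
inputs = torch.randn(4, 3, 224, 224, device=device)
labels = torch.randint(0, 1000, (4,), device=device)
loss = torch.nn.functional.cross_entropy(model(inputs), labels)
loss.backward()
optimizer.step()
xm.mark_step()  # flush the pending XLA graph
```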
The following test corresponding to shard_as is failing. Disabling it to unblock the pin update for the release.
```
@unittest.skipIf(
    xr.device_type() == 'CPU',
    "sharding will be the same for both tensors on single...
```
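For context, a self-contained sketch of the skip pattern above, assuming torch_xla's runtime module; the test body is a placeholder, not the real shard_as test.
```
import unittest
import torch_xla.runtime as xr

class ShardAsTest(unittest.TestCase):

  @unittest.skipIf(
      xr.device_type() == 'CPU',
      "sharding will be the same for both tensors on a single device")
  def test_shard_as(self):
    # Placeholder body; the real test asserts shard_as sharding specs.
    pass

if __name__ == '__main__':
  unittest.main()
```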
tpu-info CLI tests crash with Python 3.12. The crash is related to the libtpu import, not torch_xla. Creating this issue just to track that we should re-enable...
## 🐛 Bug `python test_torch.py -v TestTensorDeviceOpsXLA` is part of the CPU CI and it doesn't run any tests. This gives a non-zero exit code, causing the CI test...