Chris Jones

4 comments

> > * What would be the most important factors for you when considering a migration from `torch_xla` to this new stack or from PyTorch on GPU?
> > * ...

> A key advantage of PyTorch/XLA has always been the performance gains achieved by optimizing the computation graph as a whole. With Eager Mode becoming the default alongside true just-in-time...
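The trade-off described in that comment can be made concrete with a short sketch. This is not TorchTPU code; it only assumes the `torch.compile` `"openxla"` backend documented by recent PyTorch/XLA releases, under which XLA still receives a whole step as one graph to optimize even when per-op eager execution is otherwise the default.

```python
import torch
import torch_xla.core.xla_model as xm

device = xm.xla_device()

def step(x, w):
    # One training-style step: expressed as a single function so XLA can
    # fuse and optimize it as one graph rather than op by op.
    return torch.relu(x @ w).sum()

# The "openxla" backend hands the traced graph to XLA for whole-graph optimization.
compiled_step = torch.compile(step, backend="openxla")

x = torch.randn(256, 256, device=device)
w = torch.randn(256, 256, device=device)

loss = compiled_step(x, w)
print(loss.item())
```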

> > if you don't materialize any tensors in eager mode in most cases it will defer everything and send an entire computational graph to XLA.
> > how can...
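A minimal sketch of the lazy-tensor behavior being discussed, assuming the classic `torch_xla.core.xla_model` API (`xm.mark_step()`); newer releases expose `torch_xla.sync()` for the same cut point.

```python
import torch
import torch_xla.core.xla_model as xm

device = xm.xla_device()

a = torch.randn(1024, 1024, device=device)
b = torch.randn(1024, 1024, device=device)

# Under lazy tensors these ops are only recorded into an IR graph;
# nothing has actually executed on the device yet.
c = a @ b
d = torch.relu(c).sum()

# Materializing a value (print / .item() / .cpu()) or calling mark_step()
# cuts the recorded graph, compiles it with XLA, and runs it as one program.
xm.mark_step()
print(d.item())
```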

TorchTPU will use the PJRT API, similar to PyTorch/XLA and JAX.
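For reference, a hedged example of what PJRT device selection looks like on the PyTorch/XLA side today; TorchTPU's actual surface is not shown in this thread, so the `PJRT_DEVICE` environment variable and `xm.xla_device()` call below are just the currently documented PyTorch/XLA mechanism, not TorchTPU's.

```python
import os
# Select the PJRT runtime/device for PyTorch/XLA; "CPU" can be used for
# local testing instead of "TPU".
os.environ.setdefault("PJRT_DEVICE", "TPU")

import torch
import torch_xla.core.xla_model as xm

device = xm.xla_device()        # a PJRT-backed XLA device, e.g. xla:0
x = torch.ones(2, 2, device=device)
print(x.device)
```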