Enable dndarray-torch.tensor operations
The way we do it right now is, e.g., `ht.zeros(5).larray + torch.randn(5)`, and then we wrap the result into a (distributed) DNDarray again. It would be less frustrating if this step were taken care of internally.
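For reference, a minimal sketch of that manual round trip (assuming a single process, so the local torch tensor holds all of the data; `ht.zeros`, `.larray` and `ht.array` are existing heat API):

```python
import heat as ht
import torch

# operate on the process-local torch tensor ...
local_result = ht.zeros(5).larray + torch.randn(5)  # plain torch.Tensor

# ... then wrap the result back into a DNDarray by hand
result = ht.array(local_result)
```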
Issue created from a Mattermost message by @ClaudiaComito.
I see two scenarios:

1. `ht.zeros(5) + torch.randn(5) -> ht.DNDarray`
2. `ht.zeros(5).tensor() + torch.randn(5) -> torch.Tensor`
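A rough sketch of what scenario 1 could look like internally (purely illustrative: `dndarray_add` is a hypothetical helper; only `ht.array` and `ht.add` are existing heat functions):

```python
import heat as ht
import torch

# Hypothetical dispatch: if the right-hand operand is a raw torch.Tensor,
# wrap it into a DNDarray first and delegate to the existing element-wise add.
def dndarray_add(self, other):
    if isinstance(other, torch.Tensor):
        other = ht.array(other)
    return ht.add(self, other)
```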
The `.(to|as)tensor` function (or `.view(torch.Tensor)`) should cast a DNDarray to a torch.Tensor without copying the data if possible (read: if allocated on the same node), or optionally provide a `copy=True` flag. Having such views onto PyTorch tensors means that in-place operations such as

a = ht.zeros(5)
a.tensor().add_(torch.arange(5))

modify the data in `a`.
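Note that the same in-place behaviour can already be observed today through the local tensor, since `.larray` is the process-local torch tensor itself rather than a copy (shown here for the non-distributed case):

```python
import heat as ht
import torch

a = ht.zeros(5)
# add_ mutates the underlying local tensor in place, so the data in a changes too
a.larray.add_(torch.arange(5, dtype=a.larray.dtype))
```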
Thanks @dizcza, note that option 2 is already possible as `ht.zeros(5).larray + torch.randn(5) -> torch.Tensor` in the current main branch. (The syntax in 0.5.0 is `ht.zeros(5)._DNDarray__array + torch.randn(5) -> torch.Tensor`; its non-intuitiveness reflects the fact that we have only used it internally so far.)
You're totally right, though, that `ht.zeros(5).tensor()` would be a more intuitive solution. In principle very useful, but maybe not of the highest priority, only because a workaround already exists.
(Reviewed within #1109)