Pearu Peterson

Results 177 comments of Pearu Peterson

Re the single CI test failure: it appears unrelated, and a rebase will most likely resolve it.

I have always wondered why sparse compressed conversions return a copy while sparse or strided conversions return `self` when layouts do not change. For instance:

```python
>>> a=torch.tensor([[1, 2], [3,...
```
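For illustration, the two conventions being contrasted can be sketched with a toy class (this is not PyTorch code; `T`, `to_layout_aliasing`, and `to_layout_copying` are made-up names): a no-op conversion may either alias `self` or always allocate a fresh object.

```python
# Toy sketch (not PyTorch): contrasting the two no-op conversion conventions.

class T:
    def __init__(self, layout):
        self.layout = layout

    def to_layout_aliasing(self, layout):
        if layout == self.layout:
            return self          # strided/COO-style: a no-op returns self
        return T(layout)

    def to_layout_copying(self, layout):
        return T(layout)         # sparse-compressed-style: always a copy

t = T("csr")
print(t.to_layout_aliasing("csr") is t)   # True: same object comes back
print(t.to_layout_copying("csr") is t)    # False: a copy comes back
```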

Notice that the following works:

```python
csr = torch.sparse_csr_tensor((0, 1, 2), (0, 1), (1, 1), dtype=torch.float32, requires_grad=True)
csr2 = csr.to_sparse(layout=torch.sparse_csr).detach().requires_grad_(True)
x = torch.ones((2, 1), dtype=torch.float32)
y = torch.matmul(csr2, x)
z...
```

> It does not seem to work for me, prints `None`.

Yes, returning `None` from `csr.grad` makes sense. I have updated the example; the print statement should read `print(csr2.grad)`. This...
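Why `csr.grad` stays `None` while `csr2.grad` is populated can be mimicked with a toy model of leaf-tensor semantics (a sketch, not PyTorch internals; all names below are made up): `detach()` returns a new leaf cut off from the autograd graph, so gradients accumulate on the detached copy, not on the original.

```python
# Toy model of detach()/requires_grad_() semantics (not PyTorch itself).

class ToyTensor:
    def __init__(self, data, requires_grad=False):
        self.data = data
        self.requires_grad = requires_grad
        self.grad = None

    def detach(self):
        # New leaf sharing data, disconnected from the graph, grad tracking off.
        return ToyTensor(self.data, requires_grad=False)

    def requires_grad_(self, flag=True):
        self.requires_grad = flag
        return self

def backward(leaves_in_graph):
    # Accumulate a unit gradient on every participating leaf that tracks grads.
    for leaf in leaves_in_graph:
        if leaf.requires_grad:
            leaf.grad = 1.0

csr = ToyTensor([1.0, 1.0], requires_grad=True)
csr2 = csr.detach().requires_grad_(True)
backward([csr2])        # only csr2 participated in the computation
print(csr.grad)         # None: csr was detached away from the graph
print(csr2.grad)        # 1.0
```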

@alexsamardzic any idea why the CI tests fail? It looks like the `to_sparse` ATen implementation is never called...

This is a known issue: https://github.com/pytorch/pytorch/issues/86232. Meanwhile, one can use the `torch.sparse.sum` function, which implements `dim` and `keepdim` support:

```python
>>> torch.sparse.sum(s, dim=0)
tensor(7)
```
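For intuition, summing a 2-D sparse matrix over `dim=0` accumulates each stored value into its column. A pure-Python sketch on COO-style data (toy code, not `torch.sparse.sum`; `sparse_sum_dim0` is a made-up helper):

```python
# Toy dim=0 sum over a sparse 2-D matrix stored as COO (index, value) pairs.

def sparse_sum_dim0(indices, values, ncols):
    """indices: list of (row, col) pairs; values: matching list of numbers."""
    out = [0] * ncols
    for (_, col), v in zip(indices, values):
        out[col] += v            # every entry collapses into its column
    return out

# 2x2 matrix with nonzeros (0,0)=1, (1,0)=2, (1,1)=4
print(sparse_sum_dim0([(0, 0), (1, 0), (1, 1)], [1, 2, 4], 2))  # [3, 4]
```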

FYI, https://github.com/heavyai/heavydb-internal/pull/7315 introduces `ST_AsText` and `ST_AsBinary`. So, the issue should be resolved in the next heavydb release.

PyTorch `from_dlpack` throws the following error message:

```
RuntimeError: from_dlpack received an invalid capsule. Note that DLTensor capsules can be consumed only once, so you might have already constructed a...
```
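The "consumed only once" part follows the DLPack capsule convention: a fresh capsule is named `"dltensor"`, and a consumer renames it to `"used_dltensor"` on use, so a second consumption attempt can be detected and rejected. A minimal sketch of that mechanism (toy `Capsule`/`from_dlpack`, not PyTorch internals):

```python
# Sketch of the once-only DLPack capsule convention (not PyTorch internals).

class Capsule:
    def __init__(self):
        self.name = "dltensor"           # fresh, unconsumed capsule

def from_dlpack(capsule):
    if capsule.name != "dltensor":
        raise RuntimeError(
            "from_dlpack received an invalid capsule. "
            "Note that DLTensor capsules can be consumed only once."
        )
    capsule.name = "used_dltensor"       # mark the capsule as consumed
    return object()                       # stand-in for the constructed tensor

cap = Capsule()
from_dlpack(cap)                          # first consumption succeeds
try:
    from_dlpack(cap)                      # second consumption must fail
except RuntimeError as exc:
    print(exc)
```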

I also agree that the branch at line 375 is the cause of the confusion: the lack of `__dlpack_device__` is misinterpreted as the legacy interface where the object ought to...

> how to deal with this problem?

As a first step, `from_dlpack` should not be called on an object that does not implement the DLPack protocol. This issue is about raising...
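The kind of check being asked for can be sketched as duck-typing validation before hand-off (a sketch only; `checked_from_dlpack` and `Exporter` are made-up names): an object qualifies as a DLPack exporter when it provides both `__dlpack__` and `__dlpack_device__`, and anything else should get a clear error instead of a confusing downstream failure.

```python
# Sketch: validate the DLPack protocol before consuming an object.

def checked_from_dlpack(obj):
    if not (hasattr(obj, "__dlpack__") and hasattr(obj, "__dlpack_device__")):
        raise TypeError(
            f"{type(obj).__name__!r} does not implement the DLPack protocol "
            "(__dlpack__/__dlpack_device__)"
        )
    # Hand off to the real consumer here; for the sketch, just export.
    return obj.__dlpack__()

class Exporter:
    def __dlpack__(self, stream=None):
        return "capsule"                  # stand-in for a real PyCapsule
    def __dlpack_device__(self):
        return (1, 0)                     # (kDLCPU, device id 0)

checked_from_dlpack(Exporter())           # passes the protocol check
try:
    checked_from_dlpack(object())         # no protocol -> clear TypeError
except TypeError as exc:
    print(exc)
```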