Pearu Peterson
A couple of quick notes. First, dlpack and binsparse are self-contained specifications: dlpack provides a protocol for sharing strided arrays between different array libraries, and binsparse provides a unified...
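To make the dlpack side concrete, here is a minimal sketch of the exchange, assuming NumPy >= 1.23 plays both roles; in practice the producer and consumer would be different array libraries:

```python
import numpy as np

# Producer: an ordinary strided array.
producer = np.arange(6, dtype=np.float64).reshape(2, 3)

# Consumer: imports the array via the DLPack protocol; `np.from_dlpack`
# calls `producer.__dlpack__()` under the hood and wraps the result
# without copying the data.
consumer = np.from_dlpack(producer)

# Zero-copy: both arrays view the same storage.
print(np.shares_memory(producer, consumer))  # True
```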
> While I'm on board with the proposed `__binsparse__` protocol, I'm of the opinion that sparse arrays should have as little user-facing API differences as possible as compared to strided...
> While I agree with a common API, the disadvantage I find in this approach is that, since `from_dlpack` requires the producer to pass ownership to the consumer, this is...
PyTorch has sparse formats that the binsparse specification does not define as pre-defined formats. For instance, there are BSR/BSC (blocked CSR/CSC) formats and hybrid sparse formats (COO/CSR/CSC/BSR/BSC with values being strided tensors),...
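For readers unfamiliar with the blocked layouts, the BSR structure can be illustrated with SciPy's `bsr_matrix` (PyTorch's `torch.sparse_bsr_tensor` stores the same three arrays; SciPy is used here only so the sketch stays self-contained):

```python
import numpy as np
from scipy.sparse import bsr_matrix

# Two non-zero 2x2 blocks of a 4x4 matrix: block-row 0 holds a block in
# block-column 0, block-row 1 holds a block in block-column 1.
data = np.array([[[1.0, 2.0], [3.0, 4.0]],
                 [[5.0, 6.0], [7.0, 8.0]]])
indices = np.array([0, 1])    # block column index of each stored block
indptr = np.array([0, 1, 2])  # block-row pointer: one block per block-row

A = bsr_matrix((data, indices, indptr), shape=(4, 4))
print(A.toarray())
```

The values array is a strided 3-D tensor of shape `(num_blocks, 2, 2)`, which is exactly why a sparse-exchange protocol benefits from reusing dlpack for the constituent dense buffers.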
CC: @amjames (for torch.sparse)
Recall that the `__dlpack__` method returns a capsule holding a structure that contains all the information about the array: shape, dtype, strides, pointer to storage, etc. This makes the previous array interface...
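A quick peek at the handshake in NumPy; the capsule itself is opaque from Python, and only `__dlpack__`/`__dlpack_device__` are part of the protocol:

```python
import numpy as np

a = np.zeros((2, 3))

# An opaque PyCapsule wrapping a DLManagedTensor that carries the
# shape/dtype/strides/data-pointer metadata mentioned above.
capsule = a.__dlpack__()

# Cheap metadata query: (device_type, device_id); kDLCPU has type id 1.
device = a.__dlpack_device__()
print(type(capsule).__name__, device)  # PyCapsule (1, 0)
```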
> Libraries may support more formats than are supported by the binsparse protocol, and may need to perform an additional conversion. I am not sure how `__binsparse_format__` will be helpful...
An exception with a helpful message is better than a silent, expensive conversion, imho.
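A hedged sketch of what that could look like on the consumer side. The dunder names follow the proposed protocol, but `SUPPORTED`, the `from_binsparse` helper, and the error text are purely illustrative, not part of any spec:

```python
# Formats this hypothetical consumer can ingest without conversion.
SUPPORTED = {"COO", "CSR", "CSC"}

def from_binsparse(obj):
    # Query the format descriptor first: cheap metadata only, no data copy.
    fmt = obj.__binsparse_format__()
    if fmt not in SUPPORTED:
        # Fail loudly instead of silently triggering an expensive conversion.
        raise BufferError(
            f"unsupported binsparse format {fmt!r}; "
            f"convert to one of {sorted(SUPPORTED)} first"
        )
    return obj.__binsparse__()
```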
> A `__binsparse__` call, just like a `__dlpack__` call, requires the data to be in memory in a supported format

Sparse tensors of any format can be modeled as a...
FYI, the `torch.lobpcg` implementation does not support complex inputs. To add complex support to `torch.lobpcg`, its algorithm needs to be revised (e.g. use `eigh` instead of `symeig`, etc.) and add...
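For context, here is what a real-symmetric LOBPCG call looks like, sketched with SciPy's `scipy.sparse.linalg.lobpcg` (whose interface `torch.lobpcg` resembles); extending the algorithm to complex Hermitian input is where the `eigh`-vs-`symeig` revision above comes in:

```python
import numpy as np
from scipy.sparse.linalg import lobpcg

rng = np.random.default_rng(0)
n, k = 50, 3

# Diagonal test matrix with known eigenvalues 1, 2, ..., n.
A = np.diag(np.arange(1.0, n + 1.0))

# Random initial block of k vectors; LOBPCG iterates on this subspace.
X = rng.standard_normal((n, k))

eigvals, eigvecs = lobpcg(A, X, largest=True, tol=1e-8, maxiter=200)
print(np.sort(eigvals))  # the three largest eigenvalues, 48, 49, 50
```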