Add support for pydata/sparse arrays in NumpyMatrixOperator
Currently, NumpyMatrixOperator supports NumPy arrays and SciPy sparse matrices. It would be good to also add support for pydata/sparse arrays, as SciPy did in version 1.4.0.
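A minimal sketch of what this could look like (the `NumpyMatrixOperator` usage matches pyMOR's current API; the pydata/sparse line is hypothetical until support is implemented):

```python
import numpy as np
import scipy.sparse as sps
import sparse  # pydata/sparse

from pymor.operators.numpy import NumpyMatrixOperator

# Supported today: dense NumPy arrays and SciPy sparse matrices.
op_dense = NumpyMatrixOperator(np.eye(4))
op_scipy = NumpyMatrixOperator(sps.eye(4, format='csr'))

# Requested: accept pydata/sparse arrays as well.
coo = sparse.eye(4)  # a sparse.COO array
# op_pydata = NumpyMatrixOperator(coo)  # hypothetical, fails today
```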
Not sure how useful that would be. That library only seems to support coo and dok formats. So to use any linear solver, the matrices would have to be converted in the first place. Do you know anyone who uses this library, @pmli?
> Not sure how useful that would be. That library only seems to support coo and dok formats. So to use any linear solver, the matrices would have to be converted in the first place.
You're right, scipy.sparse.linalg.spsolve first converts it to a SciPy sparse matrix.
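A quick sketch of the conversion issue (hedged; `COO.tocsr()` is pydata/sparse's documented conversion to a SciPy CSR matrix, which `spsolve` would otherwise perform internally anyway):

```python
import numpy as np
import sparse
from scipy.sparse.linalg import spsolve

A = 2.0 * sparse.eye(5)  # pydata/sparse COO array
b = np.ones(5)

# Passing A directly makes spsolve convert it to a SciPy sparse matrix
# first, so nothing is gained over converting explicitly:
x = spsolve(A.tocsr(), b)
```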
> Do you know anyone who uses this library, @pmli?
I expect we will use it for tensors in quadratic operators (#304).
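A hedged sketch of that use case; `sparse.random` and `sparse.tensordot` are documented pydata/sparse functions, while the quadratic-operator setup itself is made up for illustration:

```python
import numpy as np
import sparse

n = 100
# Sparse order-3 coefficient tensor of a quadratic term u -> H(u, u);
# scipy.sparse cannot represent this, but pydata/sparse COO arrays can.
H = sparse.random((n, n, n), density=1e-4)

u = np.random.rand(n)
Hu = sparse.tensordot(H, u, axes=([2], [0]))  # contract third axis, shape (n, n)
Huu = Hu @ u                                  # contract second axis, length-n vector
```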
The documentation does not really sound like this library is geared towards high-performance numerical linear algebra. Something like pytorch might be more useful when dealing with sparse tensors. (For something like BlockOperator, pydata/sparse would be fine, I assume.)
Using PyTorch for sparse tensors sounds a bit like overkill to me... From the documentation of torch.sparse, it seems PyTorch only supports the COO format. Also, there is a warning that the API might change.
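For reference, a minimal sketch of torch.sparse's COO interface (made-up values; `torch.sparse_coo_tensor` and `torch.sparse.mm` are the documented entry points):

```python
import torch

indices = torch.tensor([[0, 1, 1],
                        [2, 0, 2]])  # (ndim, nnz) index matrix
values = torch.tensor([3.0, 4.0, 5.0])
A = torch.sparse_coo_tensor(indices, values, size=(2, 3))

x = torch.ones(3, 1)
y = torch.sparse.mm(A, x)  # sparse @ dense -> dense (2, 1) result
```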
> Using PyTorch for sparse tensors sounds a bit like overkill to me...
Why? As far as I understand it, pytorch/tensorflow are basically tensor libraries with some support for automatic differentiation. We already have it as an optional dependency.
> From the documentation of torch.sparse, it seems PyTorch only supports the COO format.
As does pydata/sparse (along with dok). What sparse format would you have in mind? (We are speaking of higher-order tensors, right?)
In any case, what I was trying to say is that pydata/sparse is probably not the right library for our applications.
> > Using PyTorch for sparse tensors sounds a bit like overkill to me...
> Why? As far as I understand it, pytorch/tensorflow are basically tensor libraries with some support for automatic differentiation. We already have it as an optional dependency.
Hm, my impression is that PyTorch and TensorFlow are much bigger; e.g., installing PyTorch takes significantly more time (and memory) than installing numpy or scipy.
> > From the documentation of torch.sparse, it seems PyTorch only supports the COO format.
> As does pydata/sparse (along with dok). What sparse format would you have in mind? (We are speaking of higher-order tensors, right?)
> In any case, what I was trying to say is that pydata/sparse is probably not the right library for our applications.
What I was trying to ask is: why do you think torch.sparse is better if it doesn't support better formats than pydata/sparse (some generalization of CSR/CSC)?
> What I was trying to ask is: why do you think torch.sparse is better if it doesn't support better formats than pydata/sparse (some generalization of CSR/CSC)?
I am just assuming that PyTorch's tensors are more optimized towards linear algebra operations. My impression was, without really having looked into it, that pydata/sparse is more for 'analysis of sparse data'.
I don't think we want to work on this any time soon, so I tagged it with the 'future' milestone.