CSR/CSC Tracking issue
cc @daletovar @ivirshup
- [x] Make attributes immutable, particularly `compressed_axes`.
- [ ] Implement optimization changes for CSR/CSC:
  - [ ] Modify `__array_function__`
  - [ ] Modify `elemwise`
  - [ ] Modify `_reduce_*`
  - [ ] Modify
- [ ] Benchmarks vs. scipy
  - [ ] Indexing
  - [ ] Broadcasting
  - [ ] Reductions
  - [ ] Array functions
- [ ] Tests/benchmarks for
  - [ ] Indexing
  - [ ] Broadcasting
  - [ ] Reductions
  - [ ] Array functions
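As a reference point for the "Modify `__array_function__`" item, here is a minimal sketch of the NumPy `__array_function__` dispatch protocol (NEP 18). The `MySparse` class and the `implements` registration helper are illustrative stand-ins, not the library's actual implementation:

```python
import numpy as np

# Registry mapping NumPy functions to sparse-aware implementations.
HANDLED = {}

def implements(np_func):
    """Register a handler for a NumPy public function."""
    def wrap(fn):
        HANDLED[np_func] = fn
        return fn
    return wrap

class MySparse:
    def __init__(self, data):
        self.data = np.asarray(data)

    def __array_function__(self, func, types, args, kwargs):
        # NumPy calls this when e.g. np.sum(my_sparse) is invoked;
        # we dispatch to a registered handler, or decline.
        if func not in HANDLED:
            return NotImplemented
        return HANDLED[func](*args, **kwargs)

@implements(np.sum)
def _sum(x, **kwargs):
    return x.data.sum(**kwargs)
```

With this in place, `np.sum(MySparse([1, 2, 3]))` routes through the registry instead of NumPy's dense implementation.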
It would also be good to figure out which attributes should be immutable, and for which classes this applies.
To me, information that is "part of the type" should be immutable. Now, because there isn't a super strong idea of "type of an array" in the ecosystem, I think this can be fuzzy. E.g. is the `dtype` of an array part of its type? In a sense yes, since it controls what operations are allowed, but we can't do `isinstance(x, array[int])`.

To me, it makes sense for arrays to be parametric on their number of dimensions and `dtype`. For sparse arrays with compressed axes, I think it makes sense for the compressed axis to be part of the type too.
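One straightforward way to make an attribute like `compressed_axes` immutable is a read-only property. This is a minimal sketch under that assumption; the `CompressedArray` class body is illustrative, not the library's actual code:

```python
class CompressedArray:
    def __init__(self, compressed_axes):
        # Normalize to a tuple so the stored value is itself immutable.
        self._compressed_axes = tuple(compressed_axes)

    @property
    def compressed_axes(self):
        # No setter is defined, so `x.compressed_axes = ...` raises
        # AttributeError, making the attribute effectively read-only.
        return self._compressed_axes
```

Changing the compressed axis would then require constructing a new array, which matches treating it as "part of the type".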
@GenevieveBuckley, @ryan-williams, potentially of interest to you. I think this is a key target for getting good sparse array support in dask -> having dask support in anndata.
It's great to see you working on this, @ivirshup!
Was there any specific input you wanted from me, or just cc-ing for awareness?
Also, if you want to do some pair programming to work on dask + anndata, I'd be up for that. Let me know if that's something you'd find useful.
> To me, it makes sense for arrays to be parametric on their number of dimensions and `dtype`. For sparse arrays with compressed axes, I think it makes sense for the compressed axis to be part of the type too.
+1, I agree with all of this.
I also added the points @ivirshup mentioned.
I was looking to work on this issue. @hameerabbasi @GenevieveBuckley @ivirshup, if you don't mind, could you tell me a bit more about it? I am new to open source and `sparse`.
@AmPhIbIaN26 Please don't tag everyone personally; it sends out a load of e-mails to people who may not want them.
As for the answer to your question, look at our contributing page, and follow the links; read closely. If you have any specific questions, ask in the Gitter chat and not here.
Thanks, and sorry for the inconvenience; I'll follow up in Gitter.
Sorry for the late response on this! I'm quite busy with PhD/ life commitments at the moment but should hopefully have more time to work on this next month.
@GenevieveBuckley, mostly just letting you know! The pair programming could definitely be useful, I'll let you know when I have a chance to dip my toes back into dask.
@hameerabbasi a couple questions:
First, I'm trying to figure out the broadcasting, in particular result types. How does this sound for a promotion hierarchy:
| input types | output types |
|---|---|
| CSR | CSR |
| CSC | CSC |
| CSR, CSC | CSR |
| T <: {CSC, CSR}, COO (1d) | T |
The main ideas here being:
- CSR is preferred over CSC if we have to choose
- This sorta fits the "C-ordered by default" numpy api
- Alternatives include: this becomes `COO`, or choose whichever type came first
- 1d COO is treated similarly to dense arrays for finding the result type (no effect). That is, a one dimensional COO array won't cause the output to be COO.
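The promotion rules above can be sketched as a small helper function. Note that `result_format` and its string-valued inputs are hypothetical, purely to make the table executable, and are not an existing API:

```python
def result_format(*formats):
    """Pick the output storage format for a broadcast operation.

    Each format is one of "CSR", "CSC", or "COO-1d".
    Rules from the promotion table above:
      - CSR wins over CSC when both appear ("C-ordered by default")
      - a 1-d COO operand has no effect on the result format
    """
    # Drop 1-d COO operands: like dense operands, they don't influence
    # the result format.
    compressed = [f for f in formats if f in ("CSR", "CSC")]
    if not compressed:
        return "COO-1d"
    # CSR is preferred whenever both compressed formats appear.
    return "CSR" if "CSR" in compressed else "CSC"
```

For example, `result_format("CSR", "CSC")` yields `"CSR"`, while a 1-d COO operand leaves the other operand's format unchanged.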
Second, while trying some benchmarks for the stuff in my `sparse_wrapper` repo, I noticed that sparsity structure can have a large effect on performance. I think it would be important to have a good sample of matrices to use in the benchmarks. There are quite a few collections of sparse matrices, and a number of benchmarking papers in this field that I was looking at getting cases from. Do you have any preferences or recommendations here?
Benchmark dataset sources
Sources, strategies
- Performance optimization, modeling and analysis of sparse matrix-matrix products on multi-core and many-core processors
- Performance Evaluation of Sparse Matrix Multiplication Kernels on Intel Xeon Phi
- Performance Portable Sparse Matrix-Matrix Multiplication on Intel Knights Landing and NVIDIA GPUs. (Conference) | OSTI.GOV
These often take matrices from the SuiteSparse collection:
I now have some time to work on this!
@hameerabbasi, do you think we could have a short call about this? Maybe early next week? In particular I'd like to get a sense of where this sits in the broader vision for the project, especially given the work on taco integration.
@ivirshup I've sent you a message on Gitter, let's chat there. 😄