Sheehan Olver

570 comments of Sheehan Olver

Thanks for the issue. Don't have time to fix this right now, but if you make a PR I can review it.

It looks like the issue is that it assumes the storage type is the same as the array type: https://github.com/JuliaArrays/BlockArrays.jl/blob/abf38ef8a495b87f6ca7b260e636c0346dc64e35/src/blockarray.jl#L180 Probably best to default to `Array`, so change this...
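
Roughly the pattern meant by "default to `Array`" (a hypothetical, untested sketch, not the actual code at the linked line; `blockarray_with_array_blocks` is just an illustrative name):

```julia
using BlockArrays

# Hypothetical sketch: copy an arbitrary AbstractMatrix into a BlockArray whose
# blocks default to Array, rather than assuming the input's storage type.
function blockarray_with_array_blocks(A::AbstractMatrix{T}, rows, cols) where T
    B = BlockArray{T}(undef, rows, cols)   # blocks are plain Array{T,2} by default
    copyto!(B, A)
    return B
end

# Works even when the input is not an Array, e.g. a SubArray:
blockarray_with_array_blocks(view(randn(4, 4), :, 1:4), [2, 2], [2, 2])
```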

`sparse(::BlockArray)` is just calling the default, which spends a lot of time in `findall(x -> x != 0, M)`. It's possible a faster `findall` would accomplish the same feat; otherwise,...

> We should limit the type to `BlockArray{Tv,2,SparseMatrixCSC{Tv,Ti}}`, don't you agree?

No, as other types have nice sparse representations. Calling `sparse(J[Block(i,j)])` should tackle the general case. But it may be...
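
A block-wise route could look something like this (an untested sketch, not the package's implementation): sparsify each block via `sparse(J[Block(i,j)])` and concatenate, instead of the generic elementwise scan.

```julia
using SparseArrays, BlockArrays

# Sketch: convert a BlockMatrix to SparseMatrixCSC block by block, then
# concatenate, avoiding the elementwise findall scan over the whole matrix.
function sparse_blockwise(J::BlockMatrix)
    rows = [reduce(hcat, (sparse(J[I, K]) for K in blockaxes(J, 2)))
            for I in blockaxes(J, 1)]
    return reduce(vcat, rows)
end
```

`sparse_blockwise(J)` then returns a `SparseMatrixCSC` assembled from whatever representation each block has.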

If it only works with `BlockArray`, then probably `src/blockarray.jl`; otherwise `src/abstractblockarray.jl`.

I don't believe so; if you are keen to do one, I'm sure it will be appreciated.

To be honest, it feels unnatural to work with block arrays at the "index level": one would usually try to work directly with blocks. In particular I don't think the generic fallback...
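
For example, the kind of block-level access meant here (a small sketch using the public `Block` indexing):

```julia
using BlockArrays

# Block-level access instead of scalar indexing:
A = BlockArray(randn(6, 6), [2, 4], [3, 3])

A[Block(1, 2)]          # copy of the (1,2) block as a plain matrix
view(A, Block(2, 1))    # the block itself, without copying
for I in blockaxes(A, 1), J in blockaxes(A, 2)
    println(I, ", ", J, ": ", size(view(A, I, J)))
end
```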

To further add to the noise: also relevant is https://github.com/JuliaMatrices/ArrayLayouts.jl, which I use to specify traits for the underlying storage of arrays, giving much more general implementations of multiplication algorithms...
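
For a flavour of the trait approach (a minimal sketch; the exact layout types returned may differ between versions):

```julia
using LinearAlgebra, ArrayLayouts

# MemoryLayout is a trait describing how an array is stored, so multiplication
# code can dispatch on the storage layout rather than the concrete array type.
A = randn(5, 5)
MemoryLayout(typeof(A))                # a dense column-major layout
MemoryLayout(typeof(view(A, :, 1:3)))  # contiguous columns, still column-major
MemoryLayout(typeof(Symmetric(A)))     # a symmetric layout wrapping the dense one
```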

I don't have access to CUDA.jl but I believe the issue is that `GPUArray

I see. I think the solution is to rewrite https://github.com/JuliaArrays/BlockArrays.jl/blob/32990fcdf401a18921bbf4b15dc14c599e466c47/src/blocklinalg.jl#L197 to call the 5-arg `mul!` (which didn't exist when I wrote the code). This would avoid piping through ArrayLayouts.jl. Can...
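
Roughly what calling the 5-arg `mul!` block-wise could look like (an untested sketch, not the actual blocklinalg.jl code; `_blockmul!` is just an illustrative name, and conformable block structure is assumed):

```julia
using LinearAlgebra, BlockArrays

# Sketch: C = α*A*B + β*C computed block by block, accumulating each block
# product in place with the 5-arg mul! so no temporaries are allocated.
function _blockmul!(C::BlockMatrix, A::BlockMatrix, B::BlockMatrix, α::Number, β::Number)
    for I in blockaxes(C, 1), J in blockaxes(C, 2)
        CIJ = view(C, I, J)
        scale = β                       # apply β only on the first accumulation
        for K in blockaxes(A, 2)
            mul!(CIJ, view(A, I, K), view(B, K, J), α, scale)
            scale = one(β)
        end
    end
    return C
end
```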