ArrayInterface.jl
[Feature request] `compatible_multiplicative_operand` interface
In the discussion under this PR: https://github.com/SciML/ExponentialUtilities.jl/pull/84, we ran into a use case of multiplying a linear operator of a certain type by a dense matrix. We think it would be fantastic if there were an API called compatible_multiplicative_operand(target, source) in ArrayInterface. That is,
compatible_multiplicative_operand(::CuArray, dense::Matrix) = CuArray(dense)
compatible_multiplicative_operand(::Matrix, cuarray::CuArray) = Matrix(cuarray)
compatible_multiplicative_operand(::SparseMatrixCSC, dense::Matrix) = dense
compatible_multiplicative_operand(::Matrix, sparse::SparseMatrixCSC) = sparse
Similarly, we can define methods for Transpose, Adjoint, SubArray, ReshapedArray, DistributedArray, etc.
The key point is that we want two arrays to have compatible array types for multiplication, which is quite different from Base.convert.
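The proposed interface can be sketched as follows. This is a minimal, hypothetical implementation of the method table from this issue (compatible_multiplicative_operand is not an existing ArrayInterface function), restricted to the dense/sparse case so it is self-contained:

```julia
using SparseArrays, LinearAlgebra

# Hypothetical fallback: leave the source operand unchanged.
compatible_multiplicative_operand(::AbstractArray, source::AbstractArray) = source
# Keep the source dense when the target is sparse, since sparse-dense
# multiplication is the efficient pairing (no conversion performed).
compatible_multiplicative_operand(::SparseMatrixCSC, dense::Matrix) = dense
# Densify a sparse source when the target is a dense Matrix.
compatible_multiplicative_operand(::Matrix, sp::SparseMatrixCSC) = Matrix(sp)

A = sprand(4, 4, 0.3)   # sparse target
B = rand(4, 4)          # dense source
B2 = compatible_multiplicative_operand(A, B)  # B is returned as-is
C = A * B2              # sparse-dense multiply
```

Note how this differs from Base.convert: the sparse target does not force the source to become sparse; it only asks for a type the multiplication kernels handle well.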
What is dense? It looks to be an argument in some of those and maybe a global variable in others.
Ah, some typos, just fixed.
So the goal is to have a more structured hook for converting the source matrix?
Exactly.
I definitely see the utility in this, and it would be good to help with it. I'm not completely sold on the solution, though, because it's an all-in-one allocation-and-trait method for this one specific application.
This seems like a place where the overall array interface is lacking, so it would be great to have some more generic machinery that builds up to that last step. For example, we could have a trait/constructor corresponding to each matrix combination, whose result is passed to an allocating function. Otherwise it seems like we are headed towards developing many independent one-off solutions to similar problems for other methods.
These are just my initial thoughts as someone who's usually interested in the most easily maintained and generalizable solution. There's a lot of setup that goes into composing high-performance array operations. For example, TriangularSolve.jl and Octavian.jl put together a lot of information before allocating any new data. Can we have a more structured approach to allocation that works in other high-performance situations?
Is there a reason to not use ArrayInterface.restructure here?
He mentioned that it doesn't solve the problem here. Elsewhere it was said that a dense array is preferred, and if all that's needed is a way to ensure that source is a dense matrix with the same backend as target, then we could have a really simple solution; but the last example goes to a sparse matrix, so I assumed there was something more complex going on here.
To make it more general-purpose, what about adapt(*, target, source) for creating a multiplication-compatible array, and adapt(target, source) for matching the array type more closely? ArrayInterface.restructure is also not useful here because it forces the source array to have the same shape as the target array, which is not wanted.
Elsewhere, such as in OMEinsum, I have a similar concern about adapting a scalar to a GPU array: https://github.com/under-Peter/OMEinsum.jl/blob/be9b3c67148f20c36dd3b5acd299f8685954fd1a/src/cueinsum.jl#L3 . So I am pretty sure this adapt function is general-purpose.
Yeah that was the next question, are you sure this isn't just adapt with parameterless_type?
Can you elaborate on parameterless_type?
Adapt.adapt(parameterless_type(x), y)
https://github.com/SciML/SciMLBase.jl/blob/38c31b6fdea140e54697e25cb46bb3ad9eeb65c7/src/utils.jl#L75-L77
Base.@pure __parameterless_type(T) = Base.typename(T).wrapper
parameterless_type(x) = parameterless_type(typeof(x))
parameterless_type(x::Type) = __parameterless_type(x)
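To illustrate what this helper does: it strips the type parameters from an array type, so the bare wrapper (e.g. Array, SubArray, CuArray) can be handed to Adapt.adapt as a conversion target. A self-contained sketch, reproducing the SciMLBase helper linked above:

```julia
# Strip type parameters: Array{Float64,2} becomes the UnionAll Array.
Base.@pure __parameterless_type(T) = Base.typename(T).wrapper
parameterless_type(x) = parameterless_type(typeof(x))
parameterless_type(x::Type) = __parameterless_type(x)

t1 = parameterless_type(rand(3, 3))        # → Array
t2 = parameterless_type(view(rand(3), :))  # → SubArray
```

Adapt.adapt(parameterless_type(target), source) would then reconstruct source with target's wrapper type, which is the "just use adapt" alternative being suggested here.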
Ah, you meant type casting like convert, which is different from what I am proposing. Imagine you have a sparse target array and you want to multiply it by a source array. You do not want to cast the source array to the same type as the target array, because sparse-dense multiplication is more efficient than sparse-sparse multiplication.
I see. Okay, cool.