ArrayLayouts.jl
Layouts & storage
Hi @dlfivefifty,
Just opening an issue to have a place to discuss this. Would this package make it easy to deal with different storage backends as well? In CuArrays we have to dispatch to CUBLAS, then there is rocBLAS for AMD GPUs, and maybe something like DistributedArrays would want to use ScaLAPACK.
(We could consider using ArrayLayouts right away in CuArrays to deal with CUBLAS)
Yes, I think so. I think it would be as easy as:

```julia
abstract type AbstractCuColumnMajor <: AbstractColumnMajor end
struct CuDenseColumnMajor <: AbstractCuColumnMajor end

@inline materialize!(M::BlasMatMulVecAdd{<:AbstractCuColumnMajor,<:AbstractCuColumnMajor,<:AbstractCuColumnMajor}) =
    CUgemv!('N', M.α, M.A, M.B, M.β, M.C)  # or whatever it's called

MemoryLayout(::Type{<:CuArray}) = CuDenseColumnMajor()
ArrayLayouts.@layoutmul CuArray  # dispatch to ArrayLayouts
```
Supporting subviews of CuArray would just require replicating
https://github.com/JuliaMatrices/ArrayLayouts.jl/blob/e8e314491db8de5b40fa3e397cb845e926047e0c/src/memorylayout.jl#L170
We could make that a macro to simplify the setup for different types.
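For instance, a hedged sketch of what that replication might look like, reusing the layout types from the sketch above. It assumes ArrayLayouts' `sublayout` hook for computing a view's layout; the exact names and rules live in the linked file and may differ, and `CuColumnMajor` is a hypothetical extra type:

```julia
import ArrayLayouts: sublayout

# Assumption: mirror the Base rules from memorylayout.jl for the Cu layouts.
# A contiguous view of a dense column-major CuArray stays dense:
sublayout(::CuDenseColumnMajor, ::Type{<:Tuple{AbstractUnitRange{Int}}}) =
    CuDenseColumnMajor()
# A general strided view keeps column-major ordering but not density
# (CuColumnMajor is an illustrative sibling type, not defined above):
sublayout(::AbstractCuColumnMajor, ::Type{<:Tuple{Vararg{Union{Int,AbstractRange{Int}}}}}) =
    CuColumnMajor()
```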
Great, that looks easy indeed :)
So in CuArrays I was working on a PR to replace most of the ::CuArray signatures with a ::StridedCuArray union, but I might as well immediately use ArrayLayouts.jl instead :D
Pinging @maleadt as well
Yes, I would recommend using ArrayLayouts.jl: large union types like StridedArray make type inference really slow, are not as versatile as ArrayLayouts.jl, and require a great many overloads to avoid method ambiguities.
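For a sense of scale, here is a simplified sketch of the kind of union being replaced, modeled on Base's StridedArray with Array as a stand-in (the name MyStrided is made up):

```julia
# Each wrapper (SubArray, ReshapedArray, ...) multiplies the number of union
# members, and every method signature must be written against the whole union.
const MyStrided{T,N} = Union{Array{T,N},
                             SubArray{T,N,<:Array},
                             Base.ReshapedArray{T,N,<:Array}}
```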
Is replicating things the right way to go, or would it be better to regard storage location as orthogonal to layout?
One way is just to call something like this to get the underlying type for any view/permutation/wrapper:
```julia
function storage_type(A::AbstractArray)
    P = parent(A)
    typeof(A) === typeof(P) ? typeof(A) : storage_type(P)
end
storage_type(A) = typeof(A)
```
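For example, it unwraps nested wrappers down to the underlying buffer's type:

```julia
A = rand(4, 4)
V = view(A', 2:3, :)   # a SubArray wrapping an Adjoint wrapping a Matrix
storage_type(V)        # Matrix{Float64}
```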
Another option would be to have AbstractIncreasingStrides{<:CuArray} etc., but that's a much bigger change.
Since the current design is working very well I’m inclined to leave it as is. Adding more complexity runs the risk of introducing ambiguity errors, which is exactly what this is designed to avoid.
Perhaps MemoryLayout is not the right name, as the key thing is that it is used to dispatch correctly.
This is not to say there’s not room for improvement, just that the right order to do major changes would be:
- Get things working in the current setup
- Do an experimental PR with the new design
- Test that all dependent packages (BandedMatrices.jl, LazyArrays.jl, etc.) can still work with the new design
@dlfivefifty if we would wish to reduce ambiguity issues (and the number of method definitions and macros...), shouldn't it be possible to extract some of this to LinearAlgebra? I haven't had time to think it through properly, but if we only had untyped entry-point definitions in LinearAlgebra

```julia
mul!(C, A, B, a, b) = mul_impl!(mul_type(C, A, B, a, b), C, A, B, a, b)
*(A, B) = times_impl!(times_type(A, B), A, B)
```

plus a concept of traits, then all of those method-definition-generating macros could potentially be removed? https://github.com/JuliaMatrices/ArrayLayouts.jl/blob/master/src/muladd.jl#L409-L464
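To make the shape of that concrete, here is a minimal self-contained sketch under assumed names (mymul!, mul_style, and mul_impl! are hypothetical, not LinearAlgebra API):

```julia
using LinearAlgebra

# Hypothetical trait hierarchy; downstream packages would add their own styles.
abstract type MulStyle end
struct GenericMul <: MulStyle end
struct BlasColumnMajorMul <: MulStyle end

mul_style(C, A, B) = GenericMul()   # untyped fallback trait
mul_style(::Matrix{Float64}, ::Matrix{Float64}, ::Matrix{Float64}) =
    BlasColumnMajorMul()            # refined by whoever owns the types

# One untyped entry point instead of macro-generated method forests:
mymul!(C, A, B, a, b) = mul_impl!(mul_style(C, A, B), C, A, B, a, b)

mul_impl!(::BlasColumnMajorMul, C, A, B, a, b) =
    BLAS.gemm!('N', 'N', a, A, B, b, C)
mul_impl!(::GenericMul, C, A, B, a, b) = (C .= a .* (A * B) .+ b .* C)

# Usage: both the BLAS and the generic path route through one entry point.
C = zeros(3, 3); A = rand(3, 3); B = rand(3, 3)
mymul!(C, A, B, 1.0, 0.0)
```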
Going into LinearAlgebra is the long-term goal, and in fact this package originated as a PR meant for Julia v1.0, but there wasn't consensus on the design in time.
But your code is oversimplified: you also need to consider the * overloads, as sometimes the dispatch happens with non-mutable types.
Yes, I can't really foresee all the issues right now :p Maybe:

```julia
mul!(C::AbstractVecOrMat, A::AbstractVecOrMat, B::AbstractVecOrMat, a::Number, b::Number) = mul_impl!(mul_type(C, A, B, a, b), C, A, B, a, b)
*(A::AbstractVecOrMat, B::AbstractVecOrMat) = times_impl!(times_type(A, B), A, B)
```
We would need to implement it in a branch and test to see if it's going to work.
I'm not sure knowing the output type is enough on its own though.
To treat storage types too, does the base version need to call storage_type so that there's something to hook onto?

```julia
times_type(A, B) = compute_times_type(storage_type(A), storage_type(B), MemoryLayout(A), MemoryLayout(B))
```

and then CuArrays overloads compute_times_type to return a trait which Base didn't have?
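A self-contained sketch of how that hook could work (all names are illustrative, FakeGPUMatrix stands in for CuArray, and the MemoryLayout arguments are dropped for brevity):

```julia
# Recursively unwrap to the underlying storage type, as defined above.
storage_type(A::AbstractArray) =
    (P = parent(A); typeof(A) === typeof(P) ? typeof(A) : storage_type(P))
storage_type(A) = typeof(A)

abstract type TimesType end
struct GenericTimes <: TimesType end   # what Base would know about
struct GPUTimes <: TimesType end       # what a GPU package would add

struct FakeGPUMatrix{T} <: AbstractMatrix{T} end   # stand-in for CuArray

# Base provides the fallback; the GPU package overloads on its storage type:
compute_times_type(_, _) = GenericTimes()
compute_times_type(::Type{<:FakeGPUMatrix}, ::Type{<:FakeGPUMatrix}) = GPUTimes()

times_type(A, B) = compute_times_type(storage_type(A), storage_type(B))

times_type(rand(2, 2), rand(2, 2))   # GenericTimes()
```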
FYI, the PR was https://github.com/JuliaLang/julia/pull/25558. It was at one point ready to be merged, though the design has changed a lot since then. In particular, it was changed to pipe through materialize!(::MulAdd), which cleaned up a lot of the overloads.
Also FYI, https://github.com/FluxML/NNlib.jl/pull/187 (clearest here) now does what I wanted, using this package (including #12) and the storage_type function above, with promote_typejoin. Then it dispatches things like this (through to gemm!('N', 'T', ...)):

```julia
_batched_mul!(::Array{<:BlasFloat}, C, ::UnitStrideFirst, A, ::UnitStrideFirst, B, ::UnitStride{2}, α, β)
```
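For reference on the promote_typejoin step: joining the storage types of the arguments yields a common supertype that still identifies the backend, so one method can cover mixed-eltype cases (a small illustration, not NNlib's exact code):

```julia
Base.promote_typejoin(Matrix{Float64}, Matrix{Float64})      # Matrix{Float64}
j = Base.promote_typejoin(Matrix{Float64}, Matrix{Float32})  # Array{T,2} where T
j <: Array                                                   # true: still CPU storage
```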
I haven't done it yet, but the adaptation for CuArrays should be simple. Comments would be welcome.
It also has a fallback which directly checks the strides and returns the correct MemoryLayout type. This path appears to cost around 0.5μs.
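For the curious, a hedged sketch of what such a strides-checking fallback can look like (illustrative, not the PR's actual code; strided_layout is a made-up name):

```julia
using ArrayLayouts

function strided_layout(A::AbstractMatrix)
    st = strides(A)                     # throws for non-strided arrays
    st[1] == 1 && return ColumnMajor()  # unit stride down columns
    st[2] == 1 && return RowMajor()     # unit stride along rows
    return UnknownLayout()              # conservative fallback otherwise
end
```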