
Compatibility with other JuliaGPU packages

Open · ranjanan opened this issue on May 09 '16 · 7 comments

It would be nice if ArrayFire could easily interface with other JuliaGPU packages such as CUBLAS for in-place BLAS operations and finer low-level control.
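
For example, one might hope to eventually write something like the sketch below, where device_ptr and wrap_cuda are hypothetical zero-copy helpers (neither exists today), not current ArrayFire.jl or CUBLAS.jl APIs:

using ArrayFire, CUBLAS

A = rand(AFArray{Float32}, 100, 100)
B = rand(AFArray{Float32}, 100, 100)
C = zeros(AFArray{Float32}, 100, 100)

# Hypothetical zero-copy views of ArrayFire's device buffers that
# CUBLAS can consume directly, with no host round-trip.
cA = wrap_cuda(device_ptr(A), size(A))
cB = wrap_cuda(device_ptr(B), size(B))
cC = wrap_cuda(device_ptr(C), size(C))

# In-place C = A*B on the GPU, following the usual gemm! convention.
CUBLAS.gemm!('N', 'N', 1f0, cA, cB, 0f0, cC)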

ranjanan avatar May 09 '16 10:05 ranjanan

Also, Gunrock should easily be able to take LightGraphs graphs, etc.

ViralBShah avatar May 09 '16 10:05 ViralBShah

Instead, maybe the way to go would be for ArrayFire to be one set of algorithms in a larger GPULinearAlgebra package that also incorporates sparse methods?

ChrisRackauckas avatar Jun 10 '16 07:06 ChrisRackauckas

Yes ArrayFire could be a backend in a larger package.

ViralBShah avatar Jun 10 '16 14:06 ViralBShah

@ChrisRackauckas I was thinking of supporting AFArray in BandedMatrices.jl. How would that fit into your proposed GPULinearAlgebra package?

dlfivefifty avatar Nov 02 '16 07:11 dlfivefifty

I think the design would be something like this: GPULinearAlgebra.jl would be a package with abstract types such as AbstractGPUMatrix, and an interface with generic methods as fallbacks (implemented as GPU kernels, probably just calling CUSPARSE, CUBLAS, and ArrayFire). You'd set GPUBandedMatrix <: AbstractGPUMatrix, and then you'd have the fallback methods available, and you'd override *, \, etc. for the special matrix type. There's probably a little bit more to it, but that would get things pretty far.
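
Roughly, the skeleton might look like the following sketch (purely illustrative: none of these types or the gpu_gemm/gpu_gbmm helpers exist; Julia 0.5 syntax, matching the era of this thread):

# All names below are invented for illustration.
import Base: *

abstract AbstractGPUMatrix{T} <: AbstractMatrix{T}

# Generic fallback: any pair of GPU matrices multiplies via a generic
# GEMM kernel (in practice a CUBLAS/ArrayFire call).
*(A::AbstractGPUMatrix, B::AbstractGPUMatrix) = gpu_gemm(A, B)

immutable GPUBandedMatrix{T} <: AbstractGPUMatrix{T}
    data::Matrix{T}   # placeholder for the banded storage
    l::Int            # lower bandwidth
    u::Int            # upper bandwidth
end

# Specialized override: the banded type replaces the dense fallback
# with a banded multiply.
*(A::GPUBandedMatrix, B::AbstractGPUMatrix) = gpu_gbmm(A, B)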

Would it be better to try and hammer out a generic AbstractMatrix interface for linear algebra first? The goal would be to find a clear enough interface such that algorithms written against AbstractMatrix will "just work" whenever the interface is implemented, whether the matrix is on the CPU or the GPU. This tends to be the case for AbstractVectors already, but for AbstractMatrix the interface is not only informal, it's not necessarily documented (is it just: implement +, -, *, and the BLAS functions like Ac_mul_B? If so, this should be documented). This might make a good proposal to Base for a linear algebra interface. Then the GPU side just implements standard kernels for that interface, which you overload as necessary for specialized algorithms.
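
As a concrete example of what "just work" would mean, here is a sketch (not from any package) of an algorithm written only against that interface:

# Power iteration written purely against AbstractMatrix/AbstractVector:
# it only needs `*`, `/`, and `norm`, so the same code runs on CPU
# arrays or on any GPU array type implementing those operations.
function poweriter(A::AbstractMatrix, x::AbstractVector, iters::Int)
    for _ in 1:iters
        x = A * x
        x = x / norm(x)
    end
    return x
end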

ChrisRackauckas avatar Nov 02 '16 08:11 ChrisRackauckas

I think it'd probably be better to start out with an interface for AbstractGPUDenseMatrix because it seems like it would be hard to treat sparse matrices and dense matrices together.

I think GPUBandedMatrix <: AbstractGPUMatrix would be too ambitious and would run into multiple inheritance issues since there'd also be GPUBandedMatrix <: AbstractBandedMatrix. I was planning to change BandedMatrix{T} to the type

immutable BandedMatrix{T,MT<:AbstractMatrix} <: AbstractBandedMatrix{T}
    data::MT   # (u + l + 1) × n matrix holding the bands, column by column
    l::Int     # lower bandwidth
    u::Int     # upper bandwidth
    m::Int     # number of rows
end

So MT could then be any AbstractGPUMatrix, or Matrix, or whatever. Since BandedMatrices.jl uses BLAS calls underneath, hopefully overriding those calls with CUBLAS versions will just work…
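
If that works out, constructing a GPU-backed banded matrix might look like the sketch below (using the proposed parameterization above, which is not the current BandedMatrices.jl API; whether AFArray satisfies the needed interface is exactly the open question):

using ArrayFire, BandedMatrices

# Sketch: a 1000×1000 banded matrix with lower bandwidth l = 2 and
# upper bandwidth u = 1, whose (u + l + 1) × n band data is an AFArray.
data = rand(AFArray{Float64}, 4, 1000)   # 4 = u + l + 1 rows
B = BandedMatrix{Float64, AFArray{Float64,2}}(data, 2, 1, 1000)

x = rand(AFArray{Float64}, 1000)
y = B * x   # would hit GPU BLAS, if the underlying calls are overridden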

dlfivefifty avatar Nov 02 '16 09:11 dlfivefifty

I would start one step earlier: define a sensible GPU buffer/array type and operations like broadcast, conversions between backends (zero-copy where possible), and a solid set of helper functions that make it easy to load different backends. That way we guarantee that the work done is also usable by other packages and not just LinearAlgebra. Things like making it easy to launch a custom kernel should be a crucial component, but are definitely not linear-algebra specific. I started working on https://github.com/JuliaGPU/GPUArrays.jl some time ago, which has this goal. It's only slowly moving forward, but if I'm lucky, I will have some more time to put into it in the near future!
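
To make that concrete, the common layer might look roughly like this sketch (all names invented here, not the actual GPUArrays.jl API; Julia 0.5 syntax):

# Illustrative only: a minimal backend-agnostic GPU array layer.
abstract AbstractGPUArray{T,N} <: AbstractArray{T,N}

# Hooks each backend (CUDA, OpenCL, ArrayFire, ...) would implement:
upload(backend, A::Array) = error("backend must implement upload")
download(A::AbstractGPUArray) = error("backend must implement download")
launch(kernel, A::AbstractGPUArray, args...) = error("backend must implement launch")

# Cross-backend conversion: fall back to a host round-trip whenever a
# zero-copy handoff between backends isn't possible.
convert_backend(backend, A::AbstractGPUArray) = upload(backend, download(A))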

SimonDanisch avatar Nov 02 '16 10:11 SimonDanisch