# GPU support
## Purpose
The purpose of the Proposal and how it accomplishes a team/CliMA objective.
Link to any relevant PRs/issues.
Add GPU support to ClimaCore.

- #767
## Cost/benefits/risks
An analysis of the cost/benefits/risks associated with the proposal.
The key benefit of this proposal is that it would allow us to run code on GPUs, hopefully with minimal changes required to user code.
## Producers
A list of the resources and named personnel required to implement the Proposal.
## Components
A description of the main components of the software solution.
## Inputs
A description of the inputs to the solution (designs, references, equations, closures, discussions etc).
The core idea is to build a mechanism for describing GPU kernels at a high level, which can then be used to generate efficient GPU code.
### Horizontal kernel
Horizontal kernels are those which include horizontal (spectral element) operations. Internally, we will aim to generate kernels using a similar model to that of ClimateMachine.jl, which has proven to be very performant.

Threading will use one block per slab and one thread per node within the slab. As a result, a simple broadcast over a field, e.g.

```julia
y .= f.(x)
```

would consist of a simple kernel, written here in KernelAbstractions.jl (KA from here on) notation:

```julia
slabidx = @index(Group, Linear)
i, j = @index(Local, NTuple)
y[slabidx][i, j] = f(x[slabidx][i, j])
```
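For concreteness, the kernel body above could be wrapped and launched with KernelAbstractions roughly as follows. This is a sketch of the launch pattern only, not ClimaCore's actual implementation; `backend`, `Nq` (nodes per direction in a slab), and `nslabs` are placeholders.

```julia
using KernelAbstractions

# One workgroup per slab, one work-item per (i, j) node within the slab.
@kernel function pointwise!(y, f, x)
    slabidx = @index(Group, Linear)
    i, j = @index(Local, NTuple)
    @inbounds y[slabidx][i, j] = f(x[slabidx][i, j])
end

# Launch with Nq × Nq threads per workgroup and one workgroup per slab, e.g.
#   pointwise!(backend, (Nq, Nq))(y, f, x; ndrange = (Nq * nslabs, Nq))
```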
Operators (gradient, divergence, curl, interpolation, restriction) will consist of the following operations:

- allocate work arrays:
  - an input array that is local to the block (`@localmem` in KA, or shared memory in CUDA)
  - an output array that is local to the thread (`@private` in KA, `MArray` in CUDA)
- copy the input to the work array
- synchronize threads
- compute the spectral derivatives from the scratch array
- write to the output array
For example, a gradient operation

```julia
grad = Operators.Gradient()
y .= grad.(f.(x))
```

would correspond to a kernel such as:

```julia
slabidx = @index(Group, Linear)
i, j = @index(Local, NTuple)

# 1) allocate work arrays
work_in = @localmem FT (Nq, Nq)      # block-local (shared memory)
work_out = @private FT (Nq, Nq, 2)   # thread-local

# 2) copy to the input work array, concretizing any intermediate steps
work_in[i, j] = f(x[slabidx][i, j])

# 3) synchronize threads
@synchronize

# 4) spectral derivatives (D is the spectral differentiation matrix)
work_out[i, j, 1] = work_out[i, j, 2] = zero(FT)
for k in 1:Nq
    work_out[i, j, 1] += D[i, k] * work_in[k, j]
    work_out[i, j, 2] += D[j, k] * work_in[i, k]
end

# 5) write to the output array
y[slabidx][i, j] = Covariant12Vector(work_out[i, j, 1], work_out[i, j, 2])
```
### Interface
We will continue to make use of the broadcasting syntax for high-level specification, and combine it with a `byslab` function. Consider the following expression:

```julia
byslab(space) do idx
    grad = Operators.Gradient()
    y[idx] .= grad.(f.(x[idx]))
    u[idx] .= g.(y[idx])
end
```
This is really just a different syntax for the following:

```julia
function k(idx)
    grad = Operators.Gradient()
    y[idx] .= grad.(f.(x[idx]))
    u[idx] .= g.(y[idx])
end
byslab(k, space)
```

The basic idea is to transform `k` into a kernel function, which will then be launched by `byslab`.
We should be able to do this transformation by defining a `SlabNodeIndex` type:

```julia
struct SlabNodeIndex
    slabidx::SlabIndex
    nodeidx::Tuple{Int, Int}
end
```
and defining rules such that:

- `getindex(::Field, ::SlabNodeIndex)` will give a "slab-node index view"
- `broadcasted` will propagate this view
- `Operators.allocate_work` will allocate the work arrays
- `Operators.apply_slab` will:
  - copy the node values to the input work array
  - synchronize
  - compute spectral derivatives into the output work array
- `materialize!` will copy the broadcasted result into the output array.
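Putting these pieces together, a rough sketch of how `byslab` could lower to a KernelAbstractions launch is shown below. This is illustrative only: it assumes `SlabIndex` can be constructed from the linear group index, and `backend`, `Nq`, and `nslabs` are placeholders that would be taken from the space.

```julia
using KernelAbstractions

# Launch the user function `k` with one workgroup per slab and one work-item
# per (i, j) node, passing it the corresponding SlabNodeIndex.
@kernel function byslab_kernel!(k)
    slabidx = @index(Group, Linear)
    i, j = @index(Local, NTuple)
    k(SlabNodeIndex(SlabIndex(slabidx), (i, j)))  # assumes a SlabIndex(::Int) constructor
end

# byslab(k, space) would then launch this as, e.g.
#   byslab_kernel!(backend, (Nq, Nq))(k; ndrange = (Nq * nslabs, Nq))
```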
## Results and deliverables
A description of the key results, deliverables, quality expectations, and performance metrics.
## Task breakdown
### Phase 1: shallow water equations
- [x] Unify `Topology2D` and `DistributedTopology2D` (#1057)
- [x] #1085
- [x] #1073
- [x] #1074
- [x] #1075
- [x] #1076
- [x] #1077
- [x] #1078
- [x] #1079
- [x] #1080
- [x] #1108
- [x] #1081
- [x] #1082
- [x] #1118
- [x] #1083
- [x] #1084
Timeline: end of January 2023
### Phase 2: a 3D model
- [ ] Performance evaluation and tuning (e.g. look at performance metrics, tweak kernel configurations)
- [ ] #1388
- [ ] DSS
- [ ] Spectral element operators
- [ ] #1164
- [ ] #1165
- [x] #1122
- [x] #1148
- [x] Tests and support GPU-backed spaces (#1168)
- [x] Finite difference operators
  - Initially, we will use a single thread per column (with a loop over levels within the kernel). This is the simplest approach (no synchronization is required) and allows level-to-level dependencies; see the sketch after this list.
  - Explore alternatives if necessary (e.g. a thread per node)
- [x] Jacobian computation
  - Test that our existing approach can be extended.
- [x] Linear solvers
  - Adapt existing solvers in ClimaTimeSteppers.jl
- [x] Implicit time stepping
  - Test that the full piece works
- [ ] #1387
- [x] #1390
- [ ] #1389
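As a rough illustration of the thread-per-column strategy for the finite difference operators (assuming KernelAbstractions; the flat `(level, column)` array layout, the names, and the uniform spacing `Δz` are placeholders rather than ClimaCore's actual data structures):

```julia
using KernelAbstractions

# One thread per column: each thread loops over the levels of its own column,
# so no synchronization is required and level-to-level dependencies are allowed.
@kernel function column_gradient!(field_out, field_in, Δz)
    colidx = @index(Global, Linear)
    nlevels = size(field_in, 1)
    @inbounds for level in 1:nlevels - 1
        field_out[level, colidx] = (field_in[level + 1, colidx] - field_in[level, colidx]) / Δz
    end
end
```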
Timeline: aim for end of Feb 2023
### Phase 3: Multi-GPU and other features
- [ ] Limiters
- [ ] Vertical integrals
- [x] MPI GPU support
  - Set up Slurm & CI
  - Assigning GPUs among processes (a minimal sketch follows this list): see https://github.com/CliMA/ClimateMachine.jl/blob/2def0d653fcef3d069e816844fa4ec5d745c84d1/src/Driver/Driver.jl#L43
- [ ] Single-node multi GPU support
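A minimal sketch of assigning GPUs among MPI processes (in the spirit of the ClimateMachine.jl driver linked above), using the node-local rank to select a device; this assumes MPI.jl and CUDA.jl:

```julia
using MPI, CUDA

MPI.Init()
comm = MPI.COMM_WORLD

# Split the communicator by node so that ranks on the same node share a
# local communicator, then use the node-local rank to pick a GPU.
local_comm = MPI.Comm_split_type(comm, MPI.COMM_TYPE_SHARED, MPI.Comm_rank(comm))
local_rank = MPI.Comm_rank(local_comm)
CUDA.device!(local_rank % length(CUDA.devices()))
```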
### Phase 4: Fix GPU-incompatible physics implementations in ClimaAtmos
- [ ] Gravity wave parameterization: https://github.com/CliMA/ClimaAtmos.jl/pull/852
Timeline: aim for end of May 2023
## Reviewers
The names of CliMA personnel who will review the Proposal and subsequent PRs.
Can we apply operators by `element`, or even by a group of elements, instead of by `slab`? Using one block per slab would result in only about 16 to 25 threads per block (our typical use case is polynomial degree 3 to 4). The typical recommendation is to use at least 128 threads per block. A count of 16, for example, would be half a warp (NVIDIA) or a quarter of a wavefront (AMD)! Using at least a whole element would help reach this number with about 8 levels.
@simonbyrne is this still a draft, or are we fully ready here?
I guess it's ready?