GridapPETSc.jl
Concurrent subdomain-wise assembly?
Sorry for these very naive questions; I don't know the inner workings of Gridap, so they may be inappropriate.
- When you do assembly in parallel, do you use ghost elements, or do you use partial assembly and then sum contributions at the interfaces?
- I'm guessing there is some underlying data decomposition of the initial mesh depending on the number of MPI processes. Is it possible to assemble a different variational formulation just on the "local" portion of the initial mesh?
Should I ask this question somewhere else, @fverdugo? (Sorry to ping you randomly; it seems you are the latest contributor to this repo.)
Is this interface still maintained @principejavier @amartinhuertas?
Hi @prj-! Thanks for your interest in this repo. Apologies, I missed this issue. The interface is still maintained. For questions regarding the Gridap ecosystem, it is better to use the Gitter channel or the Slack workspace; we tend to be more responsive there.
I will answer your questions in a later post, but in the meantime you could take a look at the GridapDistributed repo and the associated JOSS paper, as these give you an overall picture of how distributed-memory computations are handled in the Gridap ecosystem.
Thank you very much! Excellent, I see you are using ghost cells in the JOSS paper. Is there an easy way to assemble a local matrix on the local cells + ghost cells? Kind of like what you do for BDDC, but with overlap. My end goal would be to add an interface to PCHPDDM, and for that I need both a local Mat and an IS which maps the local numbering to the global numbering.
> Is there an easy way to assemble a local matrix on the local cells + ghost cells?
Yes. The following pseudocode (I did not execute it, so it may contain typos) builds local_matrices, an instance of type XXXPData (see the PartitionedArrays.jl package for more details), holding the local matrices assembled on each process. Note that I am using the sequential backend (instead of mpi) just so the code can be debugged serially, e.g., with the Julia debugger built into the Julia extension for VSCode.
```julia
using Gridap
using GridapDistributed
using PartitionedArrays

partition = (2,2)  # number of parts (subdomains)
prun(sequential,partition) do parts
  domain = (0,1,0,1)
  mesh_partition = (4,4)
  model = CartesianDiscreteModel(parts,domain,mesh_partition)
  order = 2
  u((x,y)) = (x+y)^order  # manufactured solution
  f(x) = -Δ(u)(x)         # corresponding source term
  reffe = ReferenceFE(lagrangian,Float64,order)
  V = TestFESpace(model,reffe,dirichlet_tags="boundary")
  U = TrialFESpace(u,V)
  # Assemble one matrix per part using the local model and local FE spaces,
  # i.e., on the locally owned cells plus the ghost cells.
  local_matrices = map_parts(model.models, U.spaces, V.spaces) do model, U, V
    Ω = Triangulation(model)
    dΩ = Measure(Ω,2*order)
    a(u,v) = ∫( ∇(v)⋅∇(u) )dΩ
    l(v) = ∫( v*f )dΩ
    op = AffineFEOperator(a,l,U,V)
    op.op.matrix  # equivalently, get_matrix(op)
  end
end
```
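As a quick sanity check (my addition, not part of the original reply), one could inspect the result right after the map_parts call, still inside the prun block above; nothing here is Gridap-specific beyond the names already introduced:

```julia
# Still inside the prun(sequential,partition) do parts ... end block,
# right after local_matrices has been built:
map_parts(local_matrices) do A
  # Each A is a plain serial sparse matrix whose rows and columns correspond
  # to the DoFs of the local FE space of that part (owned plus ghost DoFs).
  @show size(A)
end
```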
> I need both a local Mat and an IS which maps the local numbering to the global numbering.
Regarding the local-to-global numbering of DoFs, the gids member variable of V (of type DistributedSingleFieldFESpace) should provide you with the information required. It is a variable of type PRange (see the PartitionedArrays.jl package and https://github.com/fverdugo/PartitionedArrays.jl/discussions/63 for more details).
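As a rough, untested sketch (my addition, not part of the original reply): assuming the PartitionedArrays 0.2 layout, where a PRange stores its per-part index information in a partition field and each part exposes a lid_to_gid vector (local DoF id to global DoF id), the raw data needed for a PETSc IS could be gathered along these lines, again inside the prun block:

```julia
# Untested sketch. Assumes V.gids isa PRange, that V.gids.partition holds one
# index set per part, and that each index set stores a lid_to_gid vector
# mapping local DoF ids to global DoF ids (PartitionedArrays 0.2 naming).
local_to_global = map_parts(V.gids.partition) do indices
  indices.lid_to_gid  # entry i is the global id of local DoF i
end
# Together with local_matrices, this is essentially the (local Mat, IS) pair
# that PCHPDDM expects, modulo the 0-based indexing on the PETSc side.
```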
Thanks for the detailed explanation. I propose to leave the issue open until I sort everything out, but if you prefer to close it (and then maybe I'll open a new one), that's OK.
OK. We can leave the issue open and discuss all that is needed here. Note that GitHub also offers the Discussions tab; for future discussions (if any), we can use that tool as well.