GridapPETSc.jl

Concurrent subdomain-wise assembly?

Open prj- opened this issue 2 years ago • 7 comments

Sorry for these very naive questions; I don't know the inner workings of Gridap, so they may be off the mark.

  1. When you do assembly in parallel, do you use ghost elements, or do you use partial assembly and then sum contributions at the interfaces?
  2. I'm guessing there is some underlying data decomposition of the initial mesh depending on the number of MPI processes. Is it possible to assemble a different variational formulation just on the "local" portion of the initial mesh?

prj- avatar Nov 24 '22 13:11 prj-

Should I ask this question somewhere else, @fverdugo? (Sorry to ping you out of the blue; you seem to be the latest contributor to this repo.)

prj- avatar Nov 29 '22 12:11 prj-

Is this interface still maintained @principejavier @amartinhuertas?

prj- avatar Dec 06 '22 17:12 prj-

Hi @prj-! Thanks for your interest in this repo, and apologies for missing this issue. The interface is still maintained. For questions about the Gridap ecosystem, it is better to use the Gitter channel or the Slack workspace; we tend to be more responsive there.

I will answer your questions in a later post, but in the meantime you could take a look at the GridapDistributed repo and the associated JOSS paper, as these give you an overall picture of how distributed-memory computations are handled in the Gridap ecosystem.

amartinhuertas avatar Dec 06 '22 20:12 amartinhuertas

Thank you very much! Excellent, I see you are using ghost cells in the JOSS paper. Is there an easy way to assemble a local matrix on the local cells + ghost cells? Kind of like what you do for BDDC, but with overlap. My end goal would be to add an interface to PCHPDDM, and for that, I need both a local Mat and an IS which maps the local numbering to the global numbering.

prj- avatar Dec 06 '22 21:12 prj-

> Is there an easy way to assemble a local matrix on the local cells + ghost cells?

Yes. The following pseudocode (I did not execute it, so it may contain typos) builds the local_matrices instance of type XXXPData (see the PartitionedArrays.jl package for more details), with the local matrices assembled on each process. Note that I am using the sequential backend (instead of mpi) just to allow the code to be debugged sequentially, e.g. with the Julia debugger built into the Julia extension for VSCode.

using Gridap
using GridapDistributed
using PartitionedArrays
partition = (2,2)
prun(sequential,partition) do parts
  domain = (0,1,0,1)
  mesh_partition = (4,4)
  model = CartesianDiscreteModel(parts,domain,mesh_partition)
  order = 2
  u((x,y)) = (x+y)^order
  f(x) = -Δ(u,x)
  reffe = ReferenceFE(lagrangian,Float64,order)
  V = TestFESpace(model,reffe,dirichlet_tags="boundary")
  U = TrialFESpace(u,V)
  # Assemble, on each process, the local matrix over its portion of the mesh
  # (owned cells plus ghost cells), using the local model and local FE spaces.
  local_matrices = map_parts(model.models, U.spaces, V.spaces) do model, U, V
    Ω = Triangulation(model)
    dΩ = Measure(Ω,2*order)
    a(u,v) = ∫( ∇(v)⋅∇(u) )dΩ
    l(v) = ∫( v*f )dΩ
    op = AffineFEOperator(a,l,U,V)
    get_matrix(op) # the locally assembled sparse matrix
  end
end
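
As a hypothetical follow-up (not part of the original snippet), you could inspect the per-process result right after the map_parts call, inside the same prun block. With the sequential backend this runs in a single Julia process, which makes it easy to step through with the debugger:

  # Hypothetical inspection step (assumes the PartitionedArrays v0.2 map_parts API);
  # place it inside the prun block, after local_matrices has been built.
  map_parts(local_matrices) do A
    @show size(A) # A is the sparse matrix assembled locally on this part
  end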

> I need both a local Mat and an IS which maps the local numbering to the global numbering.

Regarding the local-to-global numbering of DoFs, the gids member variable of V (of type DistributedSingleFieldFESpace) should provide the information you need. This variable is of type PRange (see the PartitionedArrays.jl package and https://github.com/fverdugo/PartitionedArrays.jl/discussions/63 for more details).
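
For illustration, here is a minimal, untested sketch of how the per-process local-to-global DoF indices could be extracted from that PRange, to be placed inside the same prun block as the code above. It assumes the PartitionedArrays v0.2 layout, in which each part of the gids partition carries a lid_to_gid vector; the exact field names may differ in other versions:

  # Untested sketch (assumption: PartitionedArrays v0.2 internals). On each part,
  # ids.lid_to_gid maps local DoF ids (owned + ghost) to global DoF ids; shifted to
  # 0-based indexing, this is essentially what a PETSc IS / local-to-global mapping needs.
  local_to_global = map_parts(V.gids.partition) do ids
    ids.lid_to_gid
  end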

amartinhuertas avatar Dec 07 '22 11:12 amartinhuertas

Thanks for the detailed explanation. I propose leaving the issue open until I sort everything out, but if you prefer to close it (and then maybe I'll open a new one later), that's OK.

prj- avatar Dec 07 '22 17:12 prj-

OK. We can leave the issue open and discuss everything that is needed here. Note that GitHub also offers the Discussions tab; for future discussions (if any), we can use that tool as well.

amartinhuertas avatar Dec 07 '22 23:12 amartinhuertas