dolfinx_mpc
Handling hanging nodes
Hi @jorgensd,
Very cool work with the mpc, I appreciate the contribution. This is not really an issue, more of an open discussion. I was wondering if dolfinx_mpc could be used to handle hanging nodes? It doesn't seem too far from what is currently implemented.
I plan on using t8code to manage mesh refinement within dolfinx. I know it is possible to create transitional elements to avoid hanging nodes, and there is currently a pull request to implement this in t8code:
https://elib.dlr.de/187499/1/RemovingHangingFacesFromTreeBasedAMR.pdf
However, I feel this adds a lot of complexity to the mesh management strategy, as it must be done as a post-processing step.
I was curious about your thoughts on this and whether or not dolfinx_mpc is a good tool to manage hanging nodes.
Thank you,
Francis Lacombe
I think it should be possible to handle. I've thought about this for a while, and insertion of hanging nodes is something one could do during the refinement process, where a facet splits in two. One would then have to propagate the information about the facet splits to the MPC constructor, which could then build the appropriate constraint.
As of now, it is not a trivial thing to implement, but it is a long-term goal.
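To make the idea concrete: for a P1 space, the refiner would only need to hand over, for every new midpoint vertex, the two endpoint vertices of the facet it split; the constraint is then u_mid = 0.5*(u_a + u_b). Below is a minimal sketch of that hand-over, where facet_splits, coords and V are placeholders (not existing DOLFINx data structures), using the coordinate-keyed dictionary format that create_general_constraint expects.
import dolfinx_mpc

# Hypothetical bookkeeping from the refinement step: hanging (midpoint) vertex
# -> the two endpoint vertices of the coarse facet it was created on.
facet_splits = {16: (5, 6), 17: (9, 5)}

mpc = dolfinx_mpc.MultiPointConstraint(V)  # V: P1 space on the refined mesh
for hanging, (a, b) in facet_splits.items():
    # Continuity of a P1 function across the split facet:
    #   u[hanging] = 0.5 * u[a] + 0.5 * u[b]
    slave = coords[hanging].tobytes()  # coords: vertex coordinates, geometry dtype
    masters = {coords[a].tobytes(): 0.5, coords[b].tobytes(): 0.5}
    mpc.create_general_constraint({slave: masters})
mpc.finalize()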
Hi, we from the t8code community would welcome an integration with dolfinx. Feel free to contact us with any questions. We also have experience with handling hanging nodes numerically and may be able to help.
Hello @jorgensd,
Thank you for your reply; as I suspected, it doesn't look trivial to accomplish. In the end, the "no hanging node" approach is probably the best answer, since hanging nodes don't add any information to the solution. I believe there is a plan to add mixed-topology meshes (tets, hexes, pyramids) in version 0.10 of DOLFINx? That could be useful with t8code in transition regions.
Thank you @holke, I am currently trying to familiarize myself with t8code to see how I could implement a wrapper to import/export meshes between t8code and dolfinx. I will probably contact you in the near future.
Francis Lacombe
A prototype has been made at: https://gist.github.com/jorgensd/89acdc5534f3e01741aa6a3db279ff2d
It needs some further fixes for parallel problems.
Hello @jorgensd, thank you, it's very appreciated. I see in the prototype that it currently does not work in parallel due to T-joint issues. Say we generate a uniform mesh and refine it locally on each process. Correct me if I am wrong, but this should not be a problem, as each element is owned by the process, including its dofs, so the hanging node should have all the needed info. It only really becomes a problem when you provide a mesh with hanging nodes to the partitioner.
Currently, the refinement process in DOLFINx doesn't handle refinement in such a way, so that would have to be implemented as a custom feature.
I might have a work-around for handling hanging nodes in parallel; I just haven't found time to test it.
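For reference, the ownership question above can be inspected directly through the dofmap's index map: every process owns the first size_local dofs of its local numbering and only sees the rest as ghosts of neighbouring processes, so a constraint is unproblematic as long as the hanging dof and its masters are all owned or ghosted on the same process. A minimal sketch, assuming V is the constrained function space:
from mpi4py import MPI

# Dofs with local index below size_local are owned by this process; the
# remaining num_ghosts dofs are read-only copies of dofs owned elsewhere.
imap = V.dofmap.index_map
bs = V.dofmap.index_map_bs
print(
    f"rank {MPI.COMM_WORLD.rank}: owns {imap.size_local * bs} dofs, "
    f"ghosts {imap.num_ghosts * bs}, global range {imap.local_range}"
)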
Hello @jorgensd,
I modified the example you made a bit, and it looks like it works well for the shared "middle" nodes, but not at the "corners". The problem is worse when I use lower-order polynomials. Maybe it's an artefact of the fact that each new element only has a single new true dof.
Here's the code I used:
from mpi4py import MPI
import dolfinx
import numpy as np
import basix.ufl
import ufl
from dolfinx.io import distribute_entity_data
from dolfinx.mesh import meshtags_from_entities
from dolfinx.graph import adjacencylist
assert MPI.COMM_WORLD.size == 1, "this does currently not work in parallel due to T-joint issues"
if MPI.COMM_WORLD.rank == 0:
    nodes = np.array([
        [0.000, 0.000, 0.000],
        [0.666, 0.000, 0.000],
        [1.333, 0.000, 0.000],
        [2.000, 0.000, 0.000],
        [0.000, 0.333, 0.000],
        [0.666, 0.333, 0.000],
        [1.333, 0.333, 0.000],
        [2.000, 0.333, 0.000],
        [0.000, 0.666, 0.000],
        [0.666, 0.666, 0.000],
        [1.333, 0.666, 0.000],
        [2.000, 0.666, 0.000],
        [0.000, 1.000, 0.000],
        [0.666, 1.000, 0.000],
        [1.333, 1.000, 0.000],
        [2.000, 1.000, 0.000],
        [1.000, 0.333, 0.000],
        [0.666, 0.500, 0.000],
        [1.000, 0.666, 0.000],
        [1.333, 0.500, 0.000],
        [1.000, 0.500, 0.000]])
    connectivity = np.array([
        [ 0, 1, 4, 5],
        [ 1, 2, 5, 6],
        [ 2, 3, 6, 7],
        [ 4, 5, 8, 9],
        [ 6, 7, 10, 11],
        [ 8, 9, 12, 13],
        [ 9, 10, 13, 14],
        [10, 11, 14, 15],
        [ 5, 16, 17, 20],
        [16, 6, 20, 19],
        [17, 20, 9, 18],
        [20, 19, 18, 10]])
else:
    # Other ranks contribute no cells; the empty arrays must still match the
    # geometric dimension (3) and the number of nodes per quadrilateral (4).
    nodes = np.empty((0, 3), dtype=np.float64)
    connectivity = np.empty((0, 4), dtype=np.int64)
domain = ufl.Mesh(basix.ufl.element("Lagrange", "quadrilateral", 1, shape=(nodes.shape[1],)))
mesh = dolfinx.mesh.create_mesh(MPI.COMM_WORLD, connectivity, nodes, domain)
V = dolfinx.fem.functionspace(mesh, ("Lagrange", 5))
import dolfinx_mpc
# Facets along the coarse/fine interface: each coarse facet is followed by the
# two fine sub-facets that cover it.
facets = np.array([
    [ 5, 6],
    [ 5, 16],
    [16, 6],
    [ 9, 5],
    [ 9, 17],
    [17, 5],
    [ 9, 10],
    [ 9, 18],
    [18, 10],
    [10, 6],
    [10, 19],
    [19, 6]],
    dtype=np.int64)
mesh.topology.create_connectivity(mesh.topology.dim - 1, mesh.topology.dim)
# Tags: 2 for the coarse facets, 3 for the fine sub-facets
new_local_entities, new_values = distribute_entity_data(
    mesh, mesh.topology.dim - 1, facets,
    np.array([2, 3, 3, 2, 3, 3, 2, 3, 3, 2, 3, 3], dtype=np.int32)
)
# Create meshtag from local entities
mt = meshtags_from_entities(mesh, mesh.topology.dim - 1, adjacencylist(new_local_entities), new_values)
mpc = dolfinx_mpc.MultiPointConstraint(V)
tol = 1e-10
# Constrain dofs on the fine-side facets (tag 3) to dofs on the coarse-side facets (tag 2)
mpc.create_contact_inelastic_condition(mt, 3, 2, eps2=tol)
mpc.finalize()
u = ufl.TrialFunction(V)
v = ufl.TestFunction(V)
a = ufl.inner(ufl.grad(u), ufl.grad(v)) * ufl.dx
x = ufl.SpatialCoordinate(mesh)
L = ufl.inner(x[0] * x[1] ** 2, v) * ufl.dx
facets = dolfinx.mesh.locate_entities_boundary(mesh, mesh.topology.dim - 1, lambda x: np.isclose(x[0], 0))
dofs = dolfinx.fem.locate_dofs_topological(V, mesh.topology.dim - 1, facets)
bc = dolfinx.fem.dirichletbc(0.0, dofs, V)
petsc_options = {
    "ksp_type": "preonly",
    "pc_type": "lu",
    "pc_factor_mat_solver_type": "mumps",
    "ksp_error_if_not_converged": True,
}
problem = dolfinx_mpc.LinearProblem(a, L, mpc, bcs=[bc], petsc_options=petsc_options)
uh = problem.solve()
with dolfinx.io.VTXWriter(mesh.comm, "test.bp", [uh]) as vtx:
    vtx.write(0.0)
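To see the corner mismatch more directly, one can sample uh at points on the coarse/fine interface and evaluate it from every cell containing them; with a fully consistent constraint the values from the coarse and the fine side should agree. A small check that could be appended to the script above (assuming the geometry helpers of recent DOLFINx versions):
from dolfinx import geometry

# Points on the interface between the coarse cells and the refined patch,
# away from the hanging nodes themselves (coordinates picked by hand).
check_points = np.array([[0.833, 0.333, 0.0],   # on coarse facet 5-6 / fine facet 5-16
                         [0.666, 0.450, 0.0]])  # on coarse facet 9-5 / fine facet 17-5
tree = geometry.bb_tree(mesh, mesh.topology.dim)
candidates = geometry.compute_collisions_points(tree, check_points)
colliding = geometry.compute_colliding_cells(mesh, candidates, check_points)
for i, pt in enumerate(check_points):
    # Evaluate uh from each cell that contains the point; the spread between
    # the values measures the remaining jump across the T-joint.
    cells = colliding.links(i)
    vals = [uh.eval(pt.reshape(1, 3), np.array([c], dtype=np.int32))[0, 0] for c in cells]
    print(pt[:2], vals)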
I made a second version using node-based constraints and it seems to work, but only for first-order elements.
# SPDX-License-Identifier: MIT
# Author: Jørgen S. Dokken
from mpi4py import MPI
import dolfinx
import numpy as np
import basix.ufl
import ufl
from dolfinx.io import distribute_entity_data
from dolfinx.mesh import meshtags_from_entities
from dolfinx.graph import adjacencylist
assert MPI.COMM_WORLD.size == 1, "this does currently not work in parallel due to T-joint issues"
if MPI.COMM_WORLD.rank == 0:
    nodes = np.array([
        [0.000, 0.000, 0.000],
        [0.666, 0.000, 0.000],
        [1.333, 0.000, 0.000],
        [2.000, 0.000, 0.000],
        [0.000, 0.333, 0.000],
        [0.666, 0.333, 0.000],
        [1.333, 0.333, 0.000],
        [2.000, 0.333, 0.000],
        [0.000, 0.666, 0.000],
        [0.666, 0.666, 0.000],
        [1.333, 0.666, 0.000],
        [2.000, 0.666, 0.000],
        [0.000, 1.000, 0.000],
        [0.666, 1.000, 0.000],
        [1.333, 1.000, 0.000],
        [2.000, 1.000, 0.000],
        [1.000, 0.333, 0.000],
        [0.666, 0.500, 0.000],
        [1.000, 0.666, 0.000],
        [1.333, 0.500, 0.000],
        [1.000, 0.500, 0.000]])
    connectivity = np.array([
        [ 0, 1, 4, 5],
        [ 1, 2, 5, 6],
        [ 2, 3, 6, 7],
        [ 4, 5, 8, 9],
        [ 6, 7, 10, 11],
        [ 8, 9, 12, 13],
        [ 9, 10, 13, 14],
        [10, 11, 14, 15],
        [ 5, 16, 17, 20],
        [16, 6, 20, 19],
        [17, 20, 9, 18],
        [20, 19, 18, 10]])
else:
    # Other ranks contribute no cells; the empty arrays must still match the
    # geometric dimension (3) and the number of nodes per quadrilateral (4).
    nodes = np.empty((0, 3), dtype=np.float64)
    connectivity = np.empty((0, 4), dtype=np.int64)
domain = ufl.Mesh(basix.ufl.element("Lagrange", "quadrilateral", 1, shape=(nodes.shape[1],)))
mesh = dolfinx.mesh.create_mesh(MPI.COMM_WORLD, connectivity, nodes, domain)
V = dolfinx.fem.functionspace(mesh, ("Lagrange", 1))
import dolfinx_mpc
facets = np.array([
    [ 5, 6],
    [ 5, 16],
    [16, 6],
    [ 9, 5],
    [ 9, 17],
    [17, 5],
    [ 9, 10],
    [ 9, 18],
    [18, 10],
    [10, 6],
    [10, 19],
    [19, 6]],
    dtype=np.int64)
mesh.topology.create_connectivity(mesh.topology.dim - 1, mesh.topology.dim)
# Each row: hanging (slave) node followed by the two end nodes of the parent facet.
slms = [[16, 5, 6],
        [17, 9, 5],
        [18, 9, 10],
        [19, 10, 6]]
mpc = dolfinx_mpc.MultiPointConstraint(V)
for s, m1, m2 in slms:
    # Constrain the hanging dof (at node s) to the average of the dofs at the
    # two end nodes of the parent facet; create_general_constraint identifies
    # dofs by the byte representation of their coordinates.
    ps = np.array(nodes[s], dtype=mesh.geometry.x.dtype).tobytes()
    pm1 = np.array(nodes[m1], dtype=mesh.geometry.x.dtype).tobytes()
    pm2 = np.array(nodes[m2], dtype=mesh.geometry.x.dtype).tobytes()
    s_m_c = {ps: {pm1: 0.5, pm2: 0.5}}
    mpc.create_general_constraint(s_m_c)
mpc.finalize()
u = ufl.TrialFunction(V)
v = ufl.TestFunction(V)
a = ufl.inner(ufl.grad(u), ufl.grad(v)) * ufl.dx
x = ufl.SpatialCoordinate(mesh)
L = ufl.inner(x[0] * x[1] ** 2, v) * ufl.dx
facets = dolfinx.mesh.locate_entities_boundary(mesh, mesh.topology.dim - 1, lambda x: np.isclose(x[0], 0))
dofs = dolfinx.fem.locate_dofs_topological(V, mesh.topology.dim - 1, facets)
bc = dolfinx.fem.dirichletbc(0.0, dofs, V)
petsc_options = {
    "ksp_type": "preonly",
    "pc_type": "lu",
    "pc_factor_mat_solver_type": "mumps",
    "ksp_error_if_not_converged": True,
}
problem = dolfinx_mpc.LinearProblem(a, L, mpc, bcs=[bc], petsc_options=petsc_options)
uh = problem.solve()
with dolfinx.io.VTXWriter(mesh.comm, "test.bp", [uh]) as vtx:
    vtx.write(0.0)
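For higher-order elements, the 0.5/0.5 weights above are not enough: every dof on a child facet would have to be constrained by all dofs of the parent facet, with weights obtained by evaluating the parent facet's basis functions at the child dof coordinates pulled back to the parent's reference interval. A rough sketch of computing such weights with basix (the pull-back of the child dof coordinates onto [0, 1] of the parent facet is assumed, not shown, and equispaced Lagrange points are used for illustration):
import numpy as np
import basix

def hanging_facet_weights(degree: int, child_points: np.ndarray) -> np.ndarray:
    """Weights w[i, j] such that u_child_dof[i] = sum_j w[i, j] * u_parent_dof[j].

    child_points are the child-facet dof coordinates expressed in the reference
    coordinate of the parent interval facet, shape (n_points, 1).
    """
    parent = basix.create_element(
        basix.ElementFamily.P, basix.CellType.interval, degree,
        basix.LagrangeVariant.equispaced,
    )
    # tabulate(0, x) has shape (1, n_points, n_parent_dofs, 1); strip the
    # derivative and value-size axes to get the plain basis values.
    return parent.tabulate(0, child_points)[0, :, :, 0]

# Degree 1, child dof at the midpoint of the parent facet: recovers [0.5, 0.5].
print(hanging_facet_weights(1, np.array([[0.5]])))
# Degree 2, dofs of the left child facet sit at 0, 0.5 and 0.25 on the parent
# (vertex, vertex, midpoint ordering of the equispaced Lagrange element).
print(hanging_facet_weights(2, np.array([[0.0], [0.5], [0.25]])))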