
subdivision mesh: solution on boundaries is not saved


I would like to save the solution vector on a subdivision mesh to vtk files. However, I noticed that the solution on all boundary nodes is set to zero, regardless of their previous values. This leads to an incorrect solution in the vicinity of the boundaries due to interpolation.

Investigating this issue, I tracked the problem down to build_parallel_solution_vector() (the same is true for build_solution_vector()). When executed, it returns the solution, but with all boundary nodes set to zero.

I am currently looking at the code of build_parallel_solution_vector() to see how to make it return the boundary values. Is there actually a way to do this?

I noticed that this problem does not occur without subdivision.

I attached a minimal example code that reproduces the problem. In the image, all nodes should be -1; however, the boundary nodes are 0. Attachment: boundary_values_problem test.zip
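(For context, a minimal sketch of the kind of setup that triggers this; the actual code is in the attached zip, and the mesh file, system, and variable names here are only illustrative. Includes omitted.)

libMesh::LibMeshInit init (argc, argv);
libMesh::Mesh mesh (init.comm());
mesh.read ("mesh.off");                                   // illustrative input surface mesh
libMesh::MeshTools::Subdivision::prepare_subdivision_mesh (mesh);

libMesh::EquationSystems es (mesh);
auto & system = es.add_system<libMesh::ExplicitSystem> ("sub");
system.add_variable ("u", libMesh::FOURTH, libMesh::SUBDIVISION);
es.init ();

system.solution->add (-1.);   // shift every (zero-initialized) dof to -1
system.solution->close ();

libMesh::VTKIO (mesh).write_equation_systems ("out.pvtu", es);
// expected: every node is -1; observed: boundary nodes come out as 0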

bzjan avatar Feb 24 '20 16:02 bzjan

I would say this is probably a bug in build_parallel_solution_vector() which is specific to the FESubdivision elements. It's likely we've just never seen it since we don't have much (any?) regression testing for these elements. The FESubdivision elements were contributed by some external developers who presumably got them working for their specific needs but have not subsequently maintained/updated them much in recent years...

jwpeterson avatar Feb 24 '20 23:02 jwpeterson

I was using the FESubdivision elements because of their C1 smoothness.

What would be better: use other C1-smooth elements (are there any?), or track down the bug in the FESubdivision elements (what node properties do you think I could check?)?

bzjan avatar Feb 25 '20 02:02 bzjan

The HERMITE elements are C1, but are limited to basically Cartesian meshes (xi/eta/zeta need to be parallel to x/y/z, though you can stretch in any of those directions and/or do adaptive mesh refinement with hanging nodes). The CLOUGH (Clough-Tocher) elements are C1 and work on arbitrary triangulations, but only in 2D (only in the xy plane, even, IIRC). At one point I was considering adding a C1 macroelement tetrahedron, but that was ten years ago; maybe in another ten. The RATIONAL_BERNSTEIN elements can be made C1, but the support there is brand new; I'd love to have more eyes looking at it but I can't yet recommend relying on it.
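For reference, picking one of these families is just an add_variable() call (a sketch; the system and variable names are illustrative):

// C1 cubic Hermite (tensor-product/Cartesian-aligned meshes only):
system.add_variable ("u", libMesh::THIRD, libMesh::HERMITE);

// C1 Clough-Tocher triangles (2D triangulations):
// system.add_variable ("u", libMesh::THIRD, libMesh::CLOUGH);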

roystgnr avatar Feb 25 '20 04:02 roystgnr

Thank you very much for the summary! Unfortunately, I am working on curved 2D surfaces in 3D. Looks like I will need to track down the bug in build_parallel_solution_vector().

bzjan avatar Feb 25 '20 04:02 bzjan

From what I found it seems that the halo/ghost elements have empty dof_indices vectors.

So in the following lines

elem_soln.resize(dof_indices.size());
for (std::size_t i=0; i<dof_indices.size(); i++)
  elem_soln[i] = sys_soln(dof_indices[i]);

libMesh::FEInterface::nodal_soln (dim, fe_type, elem, elem_soln, nodal_soln);

the solution at the nodes is zero.

I see two possible ways forward:

  1. Add dofs to halo cells
  2. In the case of subdivision elements, if a cell is a halo cell, copy its node values from the solution vector (where they already exist) to the solution vector currently being built.

Which option is better?

bzjan avatar Feb 26 '20 04:02 bzjan

Hmm... just so that I understand, what is dof_indices.size() here? And are you saying that elem_soln[i] is non-zero prior to calling FEInterface::nodal_soln() but then nodal_soln is zero afterward? Is there any chance you have a small test code that demonstrates the issue that we could try out?

jwpeterson avatar Feb 26 '20 18:02 jwpeterson

Attached is my test code: debug_build_parallel_solution_vector.zip. It outputs the nodal solutions of all elements first and later prints all nodes that have differing values for the system.solution vector and the rebuilt solution vector.

bzjan avatar Feb 26 '20 22:02 bzjan

Realizing that I did not answer your questions:

  1. dof_indices.size() is zero.
  2. So elem_soln.size() is zero as well.
  3. Since the loop for (std::size_t i=0; i<dof_indices.size(); i++) executes zero iterations, nodal_soln stays zero.
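A quick way to confirm this is to print the dof count per element and flag the ghosts (a sketch, assuming mesh and system are in scope and the mesh contains only Tri3Subdivision elements):

std::vector<libMesh::dof_id_type> di;
for (const auto & elem : mesh.active_element_ptr_range())
  {
    system.get_dof_map().dof_indices (elem, di);
    const auto * sd = static_cast<const libMesh::Tri3Subdivision *>(elem);
    libMesh::out << "elem " << elem->id()
                 << (sd->is_ghost() ? " (ghost)" : "")
                 << ": " << di.size() << " dofs" << std::endl;
  }

On the ghost/halo elements the printed dof count is zero, matching the empty dof_indices vectors above.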

bzjan avatar Mar 02 '20 21:03 bzjan

I found two different solutions for my problem.

  1. If you don't want to save the values of the ghost nodes and only want a correctly interpolated solution vector, then you need to make sure to exclude ghost/halo elements when adding to the parallel_soln and repeat_count vectors.

This check can be realized by inserting the guard

const libMesh::Tri3Subdivision * subdiv_elem = static_cast<const libMesh::Tri3Subdivision *>(elem);
if (!subdiv_elem->is_ghost())

immediately before the node loop for (unsigned int n=0; n<elem->n_nodes(); n++).

  2. If you also want to save the values at the ghost nodes, you need to write your own version of build_parallel_solution_vector() and modify write_equation_systems() for your output class. To retain the possibility of saving only selected systems, one can replace the section for interpolating the solution with:

for (const auto & local_node_ptr : _mesh.local_node_ptr_range())
  {
    for (unsigned int d=0; d<n_vec_dim; d++)
      {
        // For vector-valued elements, all components are in nodal_soln.
        // For each node, the components are stored in order, i.e. node_0 -> s0_x, s0_y, s0_z
        parallel_soln.set(nv*local_node_ptr->id() + (var_inc+d+var_num),
                          sys_soln.el(local_node_ptr->dof_number(system.number(), var, 0)));  // here the solution gets assigned
      }
  } // end loop over nodes

I have a feeling there might be an easier solution, though.

bzjan avatar Mar 29 '20 22:03 bzjan

Hello bzjan, looking at miscellaneous/example11, where the subdivision element capability is demonstrated, it seems that the authors used a penalty method to enforce boundary conditions. It looks like they did not add strongly enforced Dirichlet constraints (directly setting dofs). Is penalty enforcement of BCs an option for you?

vikramvgarg avatar Mar 29 '20 23:03 vikramvgarg

@vikramvgarg I am familiar with the example you mention. Are you suggesting that, instead of working with ghost cells for boundary conditions, it would be better to use the penalty method?

From what I understand it is not possible to use subdivision element meshes without ghost cells.

Thinking more about it: example 11 has the same problem I encountered. However, it goes unnoticed because the boundaries are set to 0, so when you save the solution, everything seems fine. If the boundary values were nonzero, the problem would become obvious.

bzjan avatar Mar 30 '20 00:03 bzjan

@bzjan, this should be easy to test with example 11. If you replace 0 with some small value in the lines

const dof_id_type u_dof = nodes[n]->dof_number (system.number(), u_var, 0);
const dof_id_type v_dof = nodes[n]->dof_number (system.number(), v_var, 0);
const dof_id_type w_dof = nodes[n]->dof_number (system.number(), w_var, 0);

do you still see the zeroing out at the boundary?

IIUC, ghost nodes are a part of the subdivision element method. But the numerical treatment of these nodes differs depending on the method being used to enforce the boundary conditions. My understanding might be incorrect though.

vikramvgarg avatar Mar 30 '20 01:03 vikramvgarg

@vikramvgarg How would I set the boundary value to something other than zero? The 0 in your quoted lines is the component index, right?

bzjan avatar Mar 30 '20 03:03 bzjan

Oops, my bad. Yes those lines are returning dof_id_type, not setting the value of the boundary condition. I somehow assumed that this was doing the same thing we do while using the penalty method elsewhere in the library. Please ignore that suggestion.

The line system.matrix->add (u_dof, u_dof, penalty); modifies the system stiffness matrix, putting a large penalty on the diagonal entry associated with u_dof.

To set the boundary value to something other than zero, one would also have to set the corresponding entry of the load vector to penalty * desired_value, along the lines of system.rhs->set(u_dof, penalty * desired_value).
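Putting the two pieces together, the nonzero-Dirichlet penalty pattern would look roughly like this (a sketch; u_dof and desired_value are assumed to already be in scope, and the penalty magnitude is just the usual large value used in the libMesh examples):

const libMesh::Real penalty = 1.e10;                 // large penalty weight
system.matrix->add (u_dof, u_dof, penalty);          // penalize the diagonal entry
system.rhs->add (u_dof, penalty * desired_value);    // drives u_dof toward desired_value

Whether one uses set() or add() on the rhs makes little practical difference here, since the penalty term dominates any existing load contribution.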

vikramvgarg avatar Mar 30 '20 04:03 vikramvgarg

Hi @bzjan, have you fixed this issue? I got similar results in a very simple linear elasticity problem. The Dirichlet boundary values are output as zero even when they are set to nonzero values. [image]

bylore avatar Apr 26 '21 02:04 bylore

@bylore: Yes, I fixed it by modifying the vtk output routine and using a custom build_parallel_solution_vector(). I can look for the code in the next few days.

bzjan avatar Apr 26 '21 06:04 bzjan

> @bylore: Yes, I fixed it by modifying the vtk output routine and using a custom build_parallel_solution_vector(). I can look for the code in the next few days.

Many thanks for your help!

bylore avatar Apr 26 '21 09:04 bylore