omega_h
Skip adaption if error is small
As one can see from the following output, the meshes before and after the adaption step are virtually the same. Nevertheless, I assume that the mesh object is still updated inside Omega_h. Now, whenever the mesh changes, I have to reconstruct my preconditioner, even if the mesh did not really change, and this step takes a lot of time...
Now my question:
Is it possible to get some kind of feedback from Omega_h's adapt indicating whether the mesh has changed "a lot", e.g., a boolean? Then I could use this boolean to skip the recreation of my preconditioner.
@ibaned or do you have a different solution?
limited gradation in 1 steps
generated metrics in 0.0552537 seconds
before adapting:
21326 tets, quality [0.30,1.00], 21326 >0.30
26608 edges, length [0.25,1.55], 2536 <0.71, 24052 in [0.71,1.41], 20 >1.41
after adapting:
21334 tets, quality [0.30,1.00], 21334 >0.30
26624 edges, length [0.25,1.55], 2541 <0.71, 24076 in [0.71,1.41], 7 >1.41
addressing edge lengths took 0.179833 seconds
addressing element qualities took 0.0247778 seconds
adapting took 0.205315 seconds
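The before/after counts in the log above already suggest one crude way to quantify the change without any new Omega_h API: compare entity counts. A minimal sketch (my own helper, not part of Omega_h; note it misses changes that keep counts constant, such as edge swaps):

```python
def relative_mesh_change(old_counts, new_counts):
    """Largest relative difference in entity counts between two meshes.
    A crude proxy for 'how much did adapt change the mesh'."""
    return max(abs(new - old) / old for old, new in zip(old_counts, new_counts))

# (tets, edges) before and after adapting, taken from the log above.
change = relative_mesh_change((21326, 26608), (21334, 26624))
print(f"{change:.6f}")  # well below a 1% threshold
```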
This is a reasonable idea at a high level... I just don't know how to define "a lot" in preconditioner terms, and even one small adaptation can remove or add a row to the matrix, so I'm not sure what the best procedure is for updating a preconditioner just enough to be compatible with the new mesh...
Well, let me clarify my question: in my simulation code I am doing something like this:
old_mesh # generate initial mesh
new_mesh = old_mesh
old_result # generate initial data
new_result = old_result
while t < t_end:  # time loop
    t += 1
    model.init(new_mesh)  # initialize data and solvers, preconditioners, ...
    old_result.interpolate(new_result)
    new_result = model.solve(old_result)  # my pde code
    new_mesh = adapt_mesh(old_mesh, new_result)  # omega_h
Now, what I want to do is:
old_mesh # generate initial mesh
new_mesh = old_mesh
old_result # generate initial data
new_result = old_result
while t < t_end:  # time loop
    t += 1
    if mesh_change_in_percent > 0.01:
        model.init(new_mesh)  # initialize data and solvers, preconditioners, ...
    old_result.interpolate(new_result)
    new_result = model.solve(old_result)  # my pde code
    new_mesh, mesh_change_in_percent = adapt_mesh(old_mesh, new_result)  # omega_h
So, if omega_h returned some kind of value (`mesh_change_in_percent`) that represents the amount of mesh change, I could use this value in my higher-level code to decide whether to use the new mesh (leading to a re-initialization of all solver structures) or to reuse the old mesh (together with the old solver structures). I am not sure which return value makes sense, though...
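To make the idea concrete, here is a sketch of the kind of wrapper I have in mind. Everything in it is hypothetical (`adapt_fn`, the stand-in mesh objects, and the `nelems` attribute are placeholders, not Omega_h's actual Python API); the change measure is simply the relative change in element count:

```python
from types import SimpleNamespace

def adapt_and_measure(adapt_fn, old_mesh, result):
    """Hypothetical wrapper around the adapt step: return the new mesh
    together with a scalar measure of how much the mesh changed, here
    the relative change in element count."""
    new_mesh = adapt_fn(old_mesh, result)
    change = abs(new_mesh.nelems - old_mesh.nelems) / old_mesh.nelems
    return new_mesh, change

# Toy demo with stand-in objects; a real adapt_fn would call Omega_h.
old = SimpleNamespace(nelems=21326)
fake_adapt = lambda mesh, result: SimpleNamespace(nelems=21334)
new, change = adapt_and_measure(fake_adapt, old, None)
if change > 0.01:
    pass  # re-initialize solvers, preconditioners, ...
print(change < 0.01)  # True: this adapt barely changed the mesh
```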
An alternative approach that I use in other code to avoid adapting too often is to have "trigger" quality and length definitions:
AdaptOpts opts(&mesh);
auto trigger_quality = opts.min_quality_desired - 0.02;
auto trigger_length_ratio = opts.max_length_allowed * 0.9;
Then only adapt if the quality or length goes below the trigger:
while (t < t_end) {
  t += 1;
  // interpolate, solve etc...
  set_metric_based_on_result(mesh, new_result);
  auto minqual = mesh.min_quality();
  auto maxlen = mesh.max_length();
  if (minqual < trigger_quality || maxlen > trigger_length_ratio) {
    // adapt, re-init, etc...
  }
}
Mhh, that sounds interesting!
What is `set_metric_based_on_result(mesh, new_result);`?
@bonh that is a placeholder for your own code that computes a metric field based on the current simulation state
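For illustration only, a generic version of such a placeholder (my own sketch, not Omega_h code): a common recipe for an isotropic size field is to scale each element's current size by a power of the ratio between an error goal and a per-element error estimate, so elements above the goal are refined and elements below it may coarsen:

```python
def target_sizes(current_sizes, errors, error_goal, p=2):
    """Generic isotropic sizing sketch (not Omega_h API): scale each
    element's current size h by (error_goal / error)^(1/p)."""
    return [h * (error_goal / max(e, 1e-30)) ** (1.0 / p)
            for h, e in zip(current_sizes, errors)]

# Element with error above the goal shrinks, at the goal stays, below it grows.
sizes = target_sizes([0.1, 0.1, 0.1], [0.04, 0.01, 0.0025], error_goal=0.01)
```

The resulting size field would then be converted into whatever metric representation the adaptation library expects.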