Notes on parabolic terms
This is basically taken from #1136. Currently, our idea is to merge the `DGMulti` setup from that PR into `dev`, draft a corresponding `TreeMesh` & `DGSEM` version in another PR, and see how it looks for Navier-Stokes. When that's okay, we can go ahead and merge `dev` into `main` to finally get support for parabolic terms in Trixi.jl.
- [ ] `TreeMesh` version with `DGSEM` (see the usage sketch after this list)
  - [x] BR1
  - [ ] LDG
  - [x] Mortars
  - [x] AMR for `TreeMesh`
  - [x] AMR for `P4estMesh`
    - [x] 1D
    - [x] 2D
    - [x] 3D #1239
- [x] `P4estMesh`
- [ ] `StructuredMesh`
- [x] Navier-Stokes
  - [x] Decide on which non-dimensional scaling to use.
- [x] Add a parabolic solver field (or at least a way to specify parabolic numerical parameters)
- [x] Add parabolic penalty terms which depend on `jump(u)`
  - [ ] Where should penalty terms live: equations, solver, or the elixir itself? These terms will probably need to be specialized for every single equation.
- [x] Should we have "default" BC behavior (e.g., `boundary_condition_use_inner_state` / `boundary_condition_do_nothing`)? Answer: no. The user should be required to specify this explicitly.
- [x] Should the viscous flux depend on `orientation` (i.e., compute `x` and `y` components separately)?
- [x] Decide how general we want the BCs to be
  - What data should be made available to parabolic BCs? Solution values? Gradient values? Viscous flux values?
  - For now, we keep it simple and just pass in solution values.
- [x] Docstrings
- [x] Documentation, examples, tutorial
  - [x] Explain `Gradient`/`Divergence` boundary conditions in docs
  - [ ] Add a Literate tutorial for compressible Navier-Stokes
- [ ] Naming things
  - [x] `grad_u` (current) vs. `gradients` or `gradients_u`
  - [x] `no_boundary_condition` (current) vs. `boundary_condition_do_nothing` or something else (what?). Answer: `boundary_condition_do_nothing`
  - [ ] Types for viscous methods like BR1, LDG? Current naming is `ViscousFormulation***`, but this gets pretty long.
- [ ] Gradient variables
  - [ ] Keep track of number of gradient variables and output which gradient variables are used from the parabolic equations constructor.
  - [ ] Address potential inefficiencies of converting back and forth the gradient variables (see also here)
- [ ] Avoid need for defining diffusivity through global functions https://github.com/trixi-framework/Trixi.jl/issues/1503
- [x] Add an "enstrophy" callback as a proof of concept issue with "combined" equations. #1239
- [x] Add experimental notes to all exported and/or docstring'd methods/types.
- [x] Rename `elixir_navier_stokes_xxx.jl` to `elixir_navierstokes_xxx.jl` for consistency (e.g., with `elixir_shallowwater_xxx.jl`).
- [ ] `StepsizeCallback` with viscous CFL condition
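For orientation, here is the usage sketch referenced in the first checklist item: a minimal `TreeMesh`/`DGSEM` advection-diffusion setup along the lines of the existing elixirs. The initial condition, solver choice, and tolerances below are illustrative assumptions only, not something prescribed in this issue.

```julia
using OrdinaryDiffEq, Trixi

# Hyperbolic part: linear advection; parabolic part: scalar Laplace diffusion
advection_velocity = (1.0, 1.0)
equations = LinearScalarAdvectionEquation2D(advection_velocity)
equations_parabolic = LaplaceDiffusion2D(5.0e-2, equations)

# Smooth initial condition (illustrative choice)
initial_condition_sine(x, t, equations) = SVector(sinpi(x[1]) * sinpi(x[2]))

solver = DGSEM(polydeg = 3, surface_flux = flux_lax_friedrichs)
mesh = TreeMesh((-1.0, -1.0), (1.0, 1.0),
                initial_refinement_level = 4, n_cells_max = 30_000)

# Hyperbolic and parabolic equations are bundled into one semidiscretization;
# the TreeMesh is periodic by default, so no boundary conditions are passed here (assumption)
semi = SemidiscretizationHyperbolicParabolic(mesh, (equations, equations_parabolic),
                                             initial_condition_sine, solver)

# semidiscretize returns a SplitODEProblem: parabolic RHS first, hyperbolic RHS second
ode = semidiscretize(semi, (0.0, 1.0))
sol = solve(ode, RDPK3SpFSAL49(); abstol = 1.0e-8, reltol = 1.0e-8,
            save_everystep = false)
```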
Regarding the analysis callback: when we call, e.g.,
https://github.com/trixi-framework/Trixi.jl/blob/db25cc3eaf250d377ffde4f51b9374237f778da8/src/callbacks_step/analysis_dg2d.jl#L198
is the `du` here only the `rhs` from the hyperbolic solver, or would it already contain the sum of hyperbolic and parabolic?

`du` is set as
https://github.com/trixi-framework/Trixi.jl/blob/db25cc3eaf250d377ffde4f51b9374237f778da8/src/callbacks_step/analysis.jl#L222
so it should be the combined RHS, I think.
The analysis callback reports the hyperbolic equation as the system being used, e.g.,

```
Simulation running 'CompressibleEulerEquations2D' with DGSEM(polydeg=3)
```

This should be specialized.
Hi, what is the status on AMR & mortars? Is someone currently working on this, or is there still some theoretical work needed to get, e.g., the mortars for parabolic terms right? If it is just the implementation, I would offer to work on this.
As far as I know, nobody is working on that (except maybe someone around you, @jlchan?). The L2 mortars should work fine for parabolic terms - I think the same approach was/is used in FLUXO.
Thus, it would be great if you'd consider working on this - it would greatly increase the applicability of AMR for realistic flow setups 👍
I can only second what @sloede has said - it would be a great contribution!
@DanielDoehring @sloede parabolic mortars are on @apey236's and my list, but we were going to focus on MPI parallelization first. If you are interested in adding mortars for parabolic terms, I would be happy to help!
So I tried to come up with a copy-paste-based version originating from the mortars for purely hyperbolic equations (see https://github.com/trixi-framework/Trixi.jl/pull/1571).
As spelled out in the conversation of that draft PR (https://github.com/trixi-framework/Trixi.jl/pull/1571#discussion_r1265230939, https://github.com/trixi-framework/Trixi.jl/pull/1571#discussion_r1265233269), I could use some support/understanding in the `calc_gradient` function - did you implement this originally, @jlchan?
Hi @DanielDoehring - yes, I implemented this originally in `DGMulti`, with @sloede and @ranocha's help for `TreeMesh`. Let me take a look at that PR and see if I can respond.
Regarding AMR: My very first steps suggest that the split form of the ODE might complicate things, thus I would prefer working with the standard form first. Ideally, we could try to get `OrdinaryDiffEq` to do the work, which should somehow be possible - otherwise, if not, we might need to combine `rhs!` and `rhs_parabolic!` into something like `rhs_hyperbolic_parabolic!` ourselves.
Can you explain briefly what makes it hard to use `SplitODEProblem` with AMR?
No, not really - for this I lack a proper understanding of what is going on, which would of course be easier to get if I could change only one thing at a time.
What I observe, however, is that after the grid gets changed, the boundscheck
https://github.com/trixi-framework/Trixi.jl/blob/375384659cb57a80e253c5e685db8ec298e30d8c/src/solvers/dg.jl#L538-L541
fails when called from
https://github.com/trixi-framework/Trixi.jl/blob/375384659cb57a80e253c5e685db8ec298e30d8c/src/semidiscretization/semidiscretization_hyperbolic_parabolic.jl#L305-L319
which is in turn called from

```julia
function (f::SplitFunction)(du, u, p, t)
    f.f1(f.cache, u, p, t)
    f.f2(du, u, p, t)
    du .+= f.cache
end
```

which resides in `SciMLBase.jl/src/scimlfunctions.jl`.
The error will be in the AMR part; it is just a bit hard to track down where it originates, since one has to navigate pretty much through the entire `SciMLBase`, `OrdinaryDiffEq`, and `DiffEqBase` jungle.
Could you please create an MWE of a `SplitODEProblem`, set up an `integrator`, and `resize!` it? I guess some caches are not resized correctly or something like that.
> Could you please create an MWE of a `SplitODEProblem`, set up an `integrator`, and `resize!` it? I guess some caches are not resized correctly or something like that.
Yes, that is probably what is going on - currently, only the `cache` of the hyperbolic part is passed on:
https://github.com/trixi-framework/Trixi.jl/blob/b0ec66ea004c84d8487c4318a54933da8c827c92/src/callbacks_step/amr.jl#L186-L193
This needs to be adapted to incorporate (at least) the `cache_parabolic` as well when we pass a `semi` of type `SemidiscretizationHyperbolicParabolic`.
Yes, that's the fix we need to make :+1:
> Could you please create an MWE of a `SplitODEProblem`, set up an `integrator`, and `resize!` it? I guess some caches are not resized correctly or something like that.
An MWE is not so easy to provide, but there is certainly something wrong with the `resize!` function from `OrdinaryDiffEq` for problems with a split RHS.
While debugging, I changed this
https://github.com/trixi-framework/Trixi.jl/blob/67d137d6712f28bbe99ffef3d003afe96c47aade/src/callbacks_step/amr.jl#L177-L180
to

```julia
if has_changed
    println("Target length u_ode: ", length(u_ode))
    println("f.cache pre-change: ", length(integrator.f.cache))
    resize!(integrator, length(u_ode))
    println("f.cache post-change: ", length(integrator.f.cache))
    u_modified!(integrator, true)
end
```

which outputs

```
Target length u_ode: 1024
f.cache pre-change: 4096
f.cache post-change: 4096
```
This is the reason why the boundscheck
https://github.com/trixi-framework/Trixi.jl/blob/375384659cb57a80e253c5e685db8ec298e30d8c/src/solvers/dg.jl#L538-L541
fails for `rhs_parabolic!`, which is passed as `f1`
https://github.com/trixi-framework/Trixi.jl/blob/67d137d6712f28bbe99ffef3d003afe96c47aade/src/semidiscretization/semidiscretization_hyperbolic_parabolic.jl#L286
and is thus called with mismatching dimensions of `du_ode` and the contents of the cache:

```julia
function (f::SplitFunction)(du, u, p, t)
    f.f1(f.cache, u, p, t) # the non-resized cache is passed to f1!
    f.f2(du, u, p, t)
    du .+= f.cache
end
```
For the moment, I avoid the `SplitFunction` entirely by basically replicating the erroneous code above in a bug-free manner via

```julia
function semidiscretize(semi::SemidiscretizationHyperbolicParabolic, tspan; split_form::Bool = true)
    u0_ode = compute_coefficients(first(tspan), semi)
    # TODO: MPI, do we want to synchronize loading and print debug statements, e.g. using
    # mpi_isparallel() && MPI.Barrier(mpi_comm())
    # See https://github.com/trixi-framework/Trixi.jl/issues/328
    iip = true # is-inplace, i.e., we modify a vector when calling rhs_parabolic!, rhs!
    # Note that the IMEX time integration methods of OrdinaryDiffEq.jl treat the
    # first function implicitly and the second one explicitly. Thus, we pass the
    # stiffer parabolic function first.
    if split_form
        return SplitODEProblem{iip}(rhs_parabolic!, rhs!, u0_ode, tspan, semi)
    else
        specialize = SciMLBase.FullSpecialize # specialize on rhs! and parameters (semi)
        return ODEProblem{iip, specialize}(rhs_hyperbolic_parabolic!, u0_ode, tspan, semi)
    end
end

function rhs_hyperbolic_parabolic!(du_ode, u_ode, semi::SemidiscretizationHyperbolicParabolic, t)
    @trixi_timeit timer() "hyperbolic-parabolic rhs!" begin
        du_ode_hyp = similar(du_ode) # TODO: Avoid allocations, make member variable of something?
        rhs!(du_ode_hyp, u_ode, semi, t)
        rhs_parabolic!(du_ode, u_ode, semi, t)
        du_ode .+= du_ode_hyp
    end
end
```

which actually works (currently 2D only).
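With this variant, an elixir would presumably just toggle the keyword; a sketch (the time integrator here is only an example):

```julia
# Hypothetical usage of the split_form switch defined above:
# fall back to a plain ODEProblem with a single combined hyperbolic-parabolic RHS
ode = semidiscretize(semi, (0.0, 1.0); split_form = false)
sol = solve(ode, RDPK3SpFSAL49(); save_everystep = false)
```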
I will try to report this bug in the `OrdinaryDiffEq` package, but it is not so easy to reproduce since I am not quite sure how to get my hands on an object of type `ODEIntegrator` (i.e., not a solver/algorithm).
> how to get my hands on an object of type `ODEIntegrator` (i.e., not a solver/algorithm).
`integrator = init(ode, alg; kwargs...)` instead of `sol = solve(ode, alg; kwargs...)`
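Putting the two hints together, a standalone reproducer could look roughly like the following sketch. The toy right-hand sides are made up for illustration, and the final cache length reflects the behavior reported above (it may differ in other OrdinaryDiffEq versions):

```julia
using OrdinaryDiffEq

# Toy split RHS: f1! plays the role of rhs_parabolic!, f2! the role of rhs!
f1!(du, u, p, t) = (du .= -u)
f2!(du, u, p, t) = (du .= 2 .* u)

u0 = ones(4096)
ode = SplitODEProblem(f1!, f2!, u0, (0.0, 1.0))

integrator = init(ode, Tsit5())
step!(integrator)

# Mimic AMR coarsening: shrink the solution vector
resize!(integrator, 1024)
println("length(u):       ", length(integrator.u))        # 1024
println("length(f.cache): ", length(integrator.f.cache))  # reportedly stays 4096 -> mismatch when f1! fills f.cache
```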
Having merged https://github.com/trixi-framework/Trixi.jl/pull/1629, we can tick off mortars & AMR for `TreeMesh`es.
Having merged https://github.com/trixi-framework/Trixi.jl/pull/1765, AMR for 3D parabolic terms on `P4estMesh`es is now also available.