
RZ PMLs Called Even if not used

Open ax3l opened this issue 8 months ago • 18 comments

Discussed in https://github.com/ECP-WarpX/WarpX/discussions/5665

Originally posted by chiehjen on February 13, 2025:

Hi:

I tried to run a simulation with mesh refinement in RZ coordinates. However, it shows the following error:

amrex::Abort::0::
ERROR   : PML are not implemented in cylindrical geometry.
            (/home/warpx/src/warpx/Source/FieldSolver/FiniteDifferenceSolver/EvolveBPML.cpp:53)
!!!
SIGABRT

The above happens even though I specified (full input file attached as rz.pdf):

boundary.field_lo = none absorbing_silver_mueller
boundary.field_hi = absorbing_silver_mueller absorbing_silver_mueller

Is it possible to do mesh refinement in RZ coordinates? Thank you in advance.

ax3l avatar Feb 25 '25 00:02 ax3l

@EZoni with a quick look, this could be a regression from our work on the MultiFabRegister #5356 #5334 #5230 ...?

Somehow EvolveBPML gets called even though PMLs are not user-selected.

ax3l avatar Feb 25 '25 00:02 ax3l

I have to look into this in detail but the fact that PMLs are used would be consistent with the fact that the simulation uses MR. MR patches are typically terminated by absorbing layers (PMLs) to prevent the reflection of electromagnetic waves. More details on this can be found in our documentation at https://warpx.readthedocs.io/en/latest/theory/amr.html.

EZoni avatar Feb 25 '25 01:02 EZoni

@chiehjen

To start looking into this, could you share the input file in a simple text format, i.e., not PDF?

EZoni avatar Feb 25 '25 01:02 EZoni

> @chiehjen
>
> To start looking into this, could you share the input file in a simple text format, i.e., not PDF?

Please use the input below (amr.max_level = 1 --> fails; amr.max_level = 0 --> runs OK):

#################################
####### GENERAL PARAMETERS ######
#################################
max_step = 500
amr.n_cell =  64  512
amr.max_grid_size = 64   # maximum size of each AMReX box, used to decompose the domain
amr.blocking_factor = 32 # minimum size of each AMReX box, used to decompose the domain
geometry.dims = RZ
geometry.prob_lo     =   0.   -10.e-6    # physical domain of r, z.
geometry.prob_hi     =  10.e-6    10.e-6

#### Below are mesh refinement ##########
###########################################
amr.max_level = 1  # Level of mesh, currently only 0 and 1 are supported
##amr.ref_ratio = 4  # this applies the same ratio (as specified) per level for all directions
amr.ref_ratio_vect = 2 4 # this is similar to ref_ratio, but allows different ratios for x,y ; x,y,z ; or r,z
warpx.fine_tag_lo = 0.0e-6  8.0e-6 # x,y; x,y,z; or r,z
warpx.fine_tag_hi = 10.0e-6 9.0e-6
warpx.do_subcycling = 1 # 0 means all time steps are the same at each level (this might violate the CFL condition); 1 allows them to be different.
##########################################

warpx.n_rz_azimuthal_modes = 2

boundary.field_lo = none absorbing_silver_mueller
boundary.field_hi = absorbing_silver_mueller absorbing_silver_mueller

#################################
############ NUMERICS ###########
#################################
warpx.verbose = 1
warpx.do_dive_cleaning = 0
warpx.use_filter = 1
warpx.filter_npass_each_dir = 0 1
warpx.cfl = 1. # if 1., the time step is set to its CFL limit

# Order of particle shape factors
algo.particle_shape = 3

#################################
# Target Profile
#

#   definitions for target extent and pre-plasma
my_constants.L    = 1.e-6            # [m] length of the target in z
my_constants.Lw = 1.0e-6               # [m] radius or half-thickness
my_constants.off_z = 10.e-6           # [m] offset from left to target in z direction
my_constants.temp = 300.           # temperature in Kelvin
my_constants.mass_al = 13.*938.272+14.*939.565  # al (#_proton + #_neutron) mass in MeV
my_constants.mass_e = 0.511 # e- mass in MeV
my_constants.mass_p = 938.272 # proton mass in MeV
my_constants.MeV_to_kg = 1.79e-30 # 1 MeV=1.79*10**(-30) Kg
my_constants.c = 3.e8 # speed of light in [m/s]
my_constants.al_density = 2.7*1000./(mass_al*MeV_to_kg) # 2.7 is [g/cm^3], convert to # density [m^-3] 
my_constants.e_density = al_density*13. # each Al has 13 e-, here assume all e- have been ionized

my_constants.length_p = 50.e-9 # Jerry: length of the proton layer (placed after al)
particles.species_names = aluminium electrons p qspal qspe qspp dummy_ele dummy_pos

particles.photon_species = qspal qspe qspp

# particle species
aluminium.species_type = aluminium
aluminium.injection_style = NUniformPerCell
aluminium.num_particles_per_cell_each_dim = 8 8 8 # for RZ, the three axes are radius, theta, and z; the recommended number of particles per theta is at least two times the number of azimuthal modes requested
aluminium.momentum_distribution_type = "maxwell_boltzmann"
aluminium.theta = 8.6e-11*temp/mass_al
# minimum and maximum z position between which particles are initialized
# --> should be set for dense targets to limit memory consumption during initialization
aluminium.zmin = 0.
aluminium.zmax = L
aluminium.xmin = -0.5*Lw
aluminium.xmax = 0.5*Lw
## For RZ, there is no difference b/w x and y, so ymin=xmin, ymax=xmax.

aluminium.profile = constant
aluminium.density = al_density

##########QED####################
aluminium.do_qed_quantum_sync = 1
aluminium.qed_quantum_sync_phot_product_species = qspal
#################################

p.species_type = proton
p.injection_style = NUniformPerCell
p.num_particles_per_cell_each_dim = 8 8 8 # 8 8
p.momentum_distribution_type = "maxwell_boltzmann"
p.theta = 8.6e-11*temp/mass_p
# minimum and maximum z position between which particles are initialized
# --> should be set for dense targets to limit memory consumption during initialization
p.zmin = L
p.zmax = L+length_p
p.xmin = -0.5*Lw
p.xmax = 0.5*Lw

p.profile = constant
p.density = 0.1*6.e28 # 1/10 of 10^22 /cm^3
##########QED####################
p.do_qed_quantum_sync = 1
p.qed_quantum_sync_phot_product_species = qspp
#################################

electrons.species_type = electron
electrons.injection_style = NUniformPerCell
electrons.num_particles_per_cell_each_dim = 8 8 8 # 8 8
electrons.momentum_distribution_type = "maxwell_boltzmann"
electrons.theta = 8.6e-11*temp/mass_e
# minimum and maximum z position between which particles are initialized
# --> should be set for dense targets to limit memory consumption during initialization
electrons.zmin = 0.
electrons.zmax = L+length_p
electrons.xmin = -0.5*Lw
electrons.xmax = 0.5*Lw

electrons.profile = parse_density_function
electrons.density_function(x,y,z) = "(z<=0.)*0.+(z>0.)*(z<=L)*e_density+(z>L)*(z<L+length_p)*e_density+(z>=L+length_p)*0."
##########QED####################
electrons.do_qed_quantum_sync = 1
electrons.qed_quantum_sync_phot_product_species = qspe
#################################

### PRODUCT SPECIES ###
qspal.species_type = "photon"
qspal.injection_style = "none"
qspal.do_qed_breit_wheeler = 1
qspal.qed_breit_wheeler_ele_product_species = dummy_ele
qspal.qed_breit_wheeler_pos_product_species = dummy_pos

qspp.species_type = "photon"
qspp.injection_style = "none"
qspp.do_qed_breit_wheeler = 1
qspp.qed_breit_wheeler_ele_product_species = dummy_ele
qspp.qed_breit_wheeler_pos_product_species = dummy_pos

qspe.species_type = "photon"
qspe.injection_style = "none"
qspe.do_qed_breit_wheeler = 1
qspe.qed_breit_wheeler_ele_product_species = dummy_ele
qspe.qed_breit_wheeler_pos_product_species = dummy_pos


#################################

dummy_ele.species_type = "electron"
dummy_ele.injection_style = "none"

dummy_pos.species_type = "positron"
dummy_pos.injection_style = "none"

#################################

##########QED TABLES####################
qed_bw.chi_min = 0.001
qed_bw.lookup_table_mode = "builtin"
qed_qs.chi_min = 0.001
qed_qs.lookup_table_mode = "builtin"
qed_qs.photon_creation_energy_threshold = 0.0
#################################
# Laser Pulse Profile
my_constants.l_wl = 0.82       # laser wavelength in um
my_constants.I0 = 3.e21/1.e18 # laser intensity in 10^18 W/cm^2
lasers.names        = laser1
laser1.position     = 0. 0. -7.0e-6     # position of the laser plane (antenna)
laser1.direction    = 0. 0. 1.          # the plane's (antenna's) normal direction
laser1.polarization = 0. 1. 0.          # the main polarization vector
laser1.a0           = 0.85*sqrt(I0)*l_wl              # maximum amplitude of the laser field [V/m]
laser1.wavelength   = l_wl*1.e-6            # central wavelength of the laser pulse [m]
laser1.profile      = Gaussian
laser1.profile_waist = 2.5e-6            # beam waist (E(w_0)=E_0/e) [m]
laser1.profile_duration = 25.e-15/1.1774 # pulse length (E(tau)=E_0/e; tau=tau_E=FWHM_I/1.17741) [s]
laser1.profile_t_peak = 2.*25.e-15/1.1774 # time until peak intensity reached at the laser plane[s]
laser1.profile_focal_distance = 7.0e-6  # focal distance from the antenna [m]

# Diagnostics
diagnostics.diags_names = diag1 #diag2 diag_p diag_e diag_r diag_ra
diag1.intervals = 1:500:100,200 #20
diag1.diag_type = Full
diag1.fields_to_plot = Er Et Ez rho_aluminium rho_electrons rho_p

chiehjen avatar Feb 25 '25 10:02 chiehjen

@chiehjen

We think the issue is due to the fact that PMLs (which terminate the MR patches) are not implemented with the FDTD solver in RZ geometry: https://github.com/ECP-WarpX/WarpX/blob/6969114e964ddd2ad7b8b2c9c512c474cd340aee/Source/WarpX.cpp#L915-L922

Could you try using the PSATD solver, instead of FDTD, and see if that works out of the box?
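That is, something like the following in the input file (a minimal sketch; note that the PSATD solver also requires a WarpX build with FFT/spectral support):

algo.maxwell_solver = psatd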

If that doesn't work, it probably means that we do not yet support MR in RZ geometry in the electromagnetic case, and we should abort the code with a meaningful error message.

Note that another user had reported issues with MR in RZ geometry in #3716.

EZoni avatar Feb 26 '25 01:02 EZoni

@RemiLehe @dpgrote

Does this sound plausible to you or am I overlooking something?

> If that doesn't work, it probably means that we do not yet support MR in RZ geometry in the electromagnetic case, and we should abort the code with a meaningful error message.

EZoni avatar Feb 26 '25 01:02 EZoni

@EZoni

Thanks. I tried with "algo.maxwell_solver = psatd"; now the error becomes:

1::Assertion `(!at_least_one_boundary_is_silver_mueller) || (electromagnetic_solver_id == ElectromagneticSolverAlgo::Yee)' failed, file "/home/warpx/warpx/src/Source/WarpX.cpp", line 901, Msg: ERROR : The Silver-Mueller boundary condition can only be used with the Yee solver. !!!

chiehjen avatar Feb 26 '25 08:02 chiehjen

@chiehjen

Thanks for the feedback. Can you try using pml instead of absorbing_silver_mueller?
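That is, one possible reading of this suggestion, applied to the boundary lines of your input (a sketch only):

boundary.field_lo = none pml
boundary.field_hi = pml pml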

EZoni avatar Mar 04 '25 01:03 EZoni

I tried pml with "algo.maxwell_solver = psatd" and got the following error:

::Assertion `field_boundary_lo[1] != FieldBoundaryType::PML && field_boundary_hi[1] != FieldBoundaryType::PML' failed, file "/home/warpx/warpx/src/Source/WarpX.cpp", line 971, Msg: ERROR : PML are not implemented in RZ geometry along z; please set a different boundary condition using boundary.field_lo and boundary.field_hi. !!! SIGABRT

If I comment out "algo.maxwell_solver = psatd", then the error becomes:

::Assertion `isAnyBoundaryPML() == false || electromagnetic_solver_id == ElectromagneticSolverAlgo::PSATD' failed, file "/home/warpx/warpx/src/Source/WarpX.cpp", line 969, Msg: ERROR : PML are not implemented in RZ geometry with FDTD; please set a different boundary condition using boundary.field_lo and boundary.field_hi. !!! SIGABRT

chiehjen avatar Mar 04 '25 11:03 chiehjen

Okay, we could then try damped along the z direction, which is specific to the PSATD solver, and see if it makes sense for the physics case that you're studying.

All boundary conditions available are listed here: https://warpx.readthedocs.io/en/latest/usage/parameters.html#domain-boundary-conditions.

Other parameters to tune if the PSATD solver works could be the order of the solver and the number of guard cells, but we can tackle this later if we reach a setup that works in the first place.
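For example, something along these lines (a sketch only; with PSATD the outer radial boundary can no longer be absorbing_silver_mueller, so pml or damped would have to be tried there as well):

algo.maxwell_solver = psatd
boundary.field_lo = none damped
boundary.field_hi = pml damped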

EZoni avatar Mar 04 '25 17:03 EZoni

> Okay, we could then try damped along the z direction, which is specific to the PSATD solver, and see if it makes sense for the physics case that you're studying.
>
> All boundary conditions available are listed here: https://warpx.readthedocs.io/en/latest/usage/parameters.html#domain-boundary-conditions.
>
> Other parameters to tune if the PSATD solver works could be the order of the solver and the number of guard cells, but we can tackle this later if we reach a setup that works in the first place.

Could you please be more specific? I tried:

algo.maxwell_solver = psatd
boundary.field_lo = none damped
boundary.field_hi = damped damped

and

algo.maxwell_solver = psatd
boundary.field_lo = none damped
boundary.field_hi = pml damped

Both give the following error message:

1::Assertion `electromagnetic_solver_id != ElectromagneticSolverAlgo::PSATD' failed, file "/home/warpx/src/warpx/Source/WarpX.cpp", line 1148, Msg: ERROR : algo.maxwell_solver = psatd is not supported because WarpX was built without spectral solvers

chiehjen avatar Mar 06 '25 08:03 chiehjen

The error message is due to the fact that WarpX was built without FFT support, controlled by the build flag WarpX_FFT.

Can you tell us how you built WarpX? In our documentation we describe several methods to install WarpX:

  • For users on local machines: https://warpx.readthedocs.io/en/latest/install/users.html
  • For developers (more control) on local machines: https://warpx.readthedocs.io/en/latest/install/cmake.html
  • For users or developers on HPC machines: https://warpx.readthedocs.io/en/latest/install/hpc.html

Once we know how you build WarpX for your own workflow, we can make sure that FFT support is included in your build, which should eventually resolve the error above.
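For reference, a CMake-based build with FFT support enabled looks roughly like this (a sketch only; the compute backend, dimensions, and other options depend on your machine and workflow):

cmake -S . -B build -DWarpX_DIMS=RZ -DWarpX_FFT=ON
cmake --build build -j 8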

EZoni avatar Mar 06 '25 17:03 EZoni

> The error message is due to the fact that WarpX was built without FFT support, controlled by the build flag WarpX_FFT.
>
> Can you tell us how you built WarpX? In our documentation we describe several methods to install WarpX:
>
>   • For users on local machines: https://warpx.readthedocs.io/en/latest/install/users.html
>   • For developers (more control) on local machines: https://warpx.readthedocs.io/en/latest/install/cmake.html
>   • For users or developers on HPC machines: https://warpx.readthedocs.io/en/latest/install/hpc.html
>
> Once we know how you build WarpX for your own workflow, we can make sure that FFT support is included in your build, which should eventually resolve the error above.

Maybe I need to clarify first that the build is the GPU version. I did install FFTW3 on the machine. WarpX is installed with the following options:

cmake -S . -B build_cuda -DWarpX_DIMS="1;2;RZ;3" -DWarpX_COMPUTE=CUDA -DWarpX_PYTHON=ON -DWarpX_QED_TABLE_GEN=ON -DWarpX_FFT=ON -DCMAKE_CUDA_COMPILER=$(which nvcc)

Note that despite WarpX_FFT=ON, the build summary reports PSATD: OFF:

WarpX build configuration:
  Version: 24.03 (24.03-19-g29a0a564cd66)
  C++ Compiler: GNU 11.4.0 /usr/bin/c++

  Installation prefix: /usr/local
        bin: bin
        lib: lib
        include: include
        cmake: lib/cmake/WarpX
        python: lib/python3.11/site-packages

  Build type: Release
  Build options:
        APP: ON
        ASCENT: OFF
        COMPUTE: CUDA
        DIMS: 1;2;RZ;3
        Embedded Boundary: OFF
        GPU clock timers: ON
        IPO/LTO: OFF
        LIB: ON (static)
        MPI: ON
        PARTICLE PRECISION: DOUBLE
        PRECISION: DOUBLE
        PSATD: OFF   <------------------------- Here !!!!
        PYTHON: ON
        PYTHON IPO: ON
        OPENPMD: ON
        QED: ON
        QED table generation: ON
        QED tools: OFF
        SENSEI: OFF

The executable generated is: warpx.rz.MPI.CUDA.DP.PDP.OPMD.QED.GENQEDTABLES.

Please tell me how to install with PSATD: ON. Thank you.

chiehjen avatar Mar 07 '25 14:03 chiehjen

I see that your WarpX version is 24.03.

The FFT build option at that time used to be called WarpX_PSATD; it was renamed later that year in #4912.

The current name of the FFT build option is WarpX_FFT (so -DWarpX_FFT=ON in your cmake command), see https://warpx.readthedocs.io/en/latest/install/cmake.html#build-options for reference.

In order to use the new build option, you would need to build a newer version of WarpX.
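For context only (not a recommendation over upgrading): on 24.03 itself, the build option that would presumably be recognized is the old name, i.e. something like

-DWarpX_PSATD=ON   # on WarpX 24.03, instead of -DWarpX_FFT=ON

in the cmake command you posted above.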

We just released WarpX 25.03. Would it be possible for you to use our latest version or is there a specific reason why you need to use WarpX 24.03?

EZoni avatar Mar 07 '25 19:03 EZoni

Thanks for the info. I finally managed to install the latest version with FFT.

I tried:

algo.maxwell_solver = psatd
boundary.field_lo = none damped
boundary.field_hi = pml damped

and got the following error:

Initializing AMReX (25.05-19-g1fbc66798140)...
MPI initialized with 4 MPI processes
MPI initialized with thread support level 3
Initializing CUDA...
Multiple GPUs are visible to each MPI rank. This is usually not an issue. But this may lead to incorrect or suboptimal rank-to-GPU mapping.
CUDA initialized with 4 devices.
AMReX (25.05-19-g1fbc66798140) initialized
3::Assertion `m_do_subcycling == 0' failed, file "/data/software/WarpX/warpx_nou/Source/WarpX.cpp", line 502
!!!
SIGABRT

Could you please help again? Thank you.

chiehjen avatar May 21 '25 10:05 chiehjen

Hi @chiehjen, since this discussion is a few months old, could you repaste here the latest full input deck that you are currently using, to make sure I can reproduce the latest runtime error correctly?

EZoni avatar Jun 04 '25 21:06 EZoni

> Hi @chiehjen, since this discussion is a few months old, could you repaste here the latest full input deck that you are currently using, to make sure I can reproduce the latest runtime error correctly?

Yes, please use the attached input.

test_mesh_refine.txt

chiehjen avatar Jun 13 '25 09:06 chiehjen

Thanks, @chiehjen.

I tested your input again and ran into various errors as well.

At this point I think it's quite fair to say that MR in RZ geometry in the electromagnetic case is not fully implemented, as I had conjectured initially in https://github.com/BLAST-WarpX/warpx/issues/5705#issuecomment-2683662749.

@RemiLehe @dpgrote Do you have reasons to believe that this should work out of the box instead?

EZoni avatar Jun 13 '25 23:06 EZoni