NonLocal ECP NaN with Batched Code
**Describe the bug**
Running a standard workflow for the spin density of neutral bulk aluminum (SCF > NSCF > Convert > J2 opt > J3 opt > DMC). The J2 optimization returns a somewhat high variance/energy ratio (~0.3) but completes without issue. The subsequent J3 optimization then fails with the error below. Inputs attached: optJ3.zip nexus_cpu.py.zip
```
QMCHamiltonian::updateComponent component NonLocalECP returns NaN.
ParticleSet 'e' contains 12 particles : u(6) d(6)
u -8.264482844 -147.1029056 -19.5066447
u 33.76739009 27.75172402 118.3710551
u 1.672580761 55.78669558 37.65452937
u 39.2687636 85.68134103 20.34447848
u 42.37720707 3.460448312 -45.80725706
u -72.5109913 88.43526486 -30.84771678
d 32.46210945 -32.54069576 49.79376128
d 0.1307777896 27.86552943 82.42433044
d 30.18674145 44.99273039 11.11852865
d -49.23514243 -21.45162295 57.21033009
d -2.465547676 -12.37312728 -21.73156034
d -3.049880534 -9.460116645 90.46841502
Distance table for dissimilar particles (A-B):
source: ion0 target: e
Using structure-of-arrays (SoA) data layout
Distance computations use orthorhombic periodic cell in 3D.
Distance table for similar particles (A-A):
source/target: e
Using structure-of-arrays (SoA) data layout
Distance computations use orthorhombic periodic cell in 3D.
Unexpected exception thrown in threaded section
Fatal Error. Aborting at Unhandled Exception
```
**To Reproduce**
Ran on Perlmutter with QMCPACK 3.17.9 batched code.
**Expected behavior**
The J3 optimization should complete all 9 series; instead it aborts after a few initial cycles.
**System:**
Modules loaded:
module unload gpu/1.0
module load cpu/1.0
module load PrgEnv-gnu
module load cray-hdf5-parallel
module load cray-fftw
module unload cray-libsci
module unload darshan
module load cmake
Note: these runs are generated and submitted with Nexus.
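For orientation, the failing step corresponds to a Nexus block along the following lines. This is only a minimal sketch: the structure file, pseudopotential name, machine/account settings, and all optimization parameters are placeholders rather than the exact contents of nexus_cpu.py.
```python
#!/usr/bin/env python3
# Minimal Nexus sketch of the failing J3 optimization step only.
# All file names, machine/account values, and optimization parameters are
# placeholders, not the actual contents of nexus_cpu.py; the SCF/NSCF,
# orbital conversion, J2 optimization, and DMC steps (and their
# dependencies) are omitted for brevity.
from nexus import settings, job, run_project
from nexus import generate_physical_system, generate_qmcpack
from nexus import loop, linear

settings(
    pseudo_dir = './pseudopotentials',   # placeholder path
    machine    = 'perlmutter',
    account    = 'mXXXX',                # placeholder allocation
    )

system = generate_physical_system(
    structure = 'Al_bulk.xsf',           # placeholder structure file
    Al        = 3,                       # 3 valence electrons with the Al ECP
    )

optJ3 = generate_qmcpack(
    identifier   = 'optJ3',
    path         = 'al/optJ3',
    job          = job(nodes=1, app='qmcpack'),
    system       = system,
    input_type   = 'basic',
    pseudos      = ['Al.ccECP.xml'],     # placeholder ECP file name
    jastrows     = [('J1','bspline',8),
                    ('J2','bspline',8),
                    ('J3','polynomial',3,3,4.0)],
    calculations = [loop(max = 6,
                         qmc = linear(minmethod   = 'OneShiftOnly',
                                      minwalkers  = 0.3,
                                      samples     = 25600,
                                      warmupsteps = 20,
                                      blocks      = 100,
                                      substeps    = 5,
                                      timestep    = 0.3))],
    # dependencies on the orbital h5 file and the optimized J1/J2 from the
    # earlier steps would normally be declared here
    )

run_project()
```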
This looks like a bug in the J3. It seems unlikely that any numerical or statistical issue is to blame since the system is so small. Usefully, this is a pure-MPI CPU run, so we can rule out anything exotic on the computational side. It is puzzling that it hasn't shown up for anyone else. It is interesting that some of the electrons have wandered a long way in terms of the primitive cell dimensions; this shouldn't matter, but perhaps it does...
Could you please put the wavefunction file somewhere accessible, or give a pointer to your Perlmutter directories and set the permissions appropriately?
The directory has been shared on Perlmutter here: /global/cfs/cdirs/m2113/al_J3
I am running the code on Polaris (QMCPACK 3.17.9 under /soft/applications/qmcpack/develop-20240118/) with the legacy drivers and a CPU-only complex build, and I also encounter a NaN error during J3 optimization with a similar workflow. The code seems to run without any error when I reduce minwalkers to 0.01 in the first few cycles, but this results in large jumps in energy and variance. Please let me know if more information is needed.
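For reference, the workaround amounts to something like the following in the Nexus-generated linear blocks; this is only a sketch with placeholder values, not my exact input:
```python
from nexus import loop, linear

# Sketch only: loosen minwalkers for the first few J3 cycles, then tighten it
# again.  All other parameter values are placeholders.
early_cycles = loop(max = 2,
                    qmc = linear(minmethod  = 'OneShiftOnly',
                                 minwalkers = 0.01,   # loosened from the usual ~0.3
                                 samples    = 25600,
                                 timestep   = 0.3))
later_cycles = loop(max = 4,
                    qmc = linear(minmethod  = 'OneShiftOnly',
                                 minwalkers = 0.5,    # tightened once J3 has settled
                                 samples    = 51200,
                                 timestep   = 0.3))

# passed to generate_qmcpack(..., calculations=[early_cycles, later_cycles])
```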
Thanks for the report. We have also heard that turning down the meshfactor can trigger the problem in J2 for the original system. This could also be #4917 or something like it.
This has been sitting around for a month, so I wanted to update the status. I have been experimenting with Ilkka’s single-atom version. While this appears to have the same problem, it could have its own issues due to being so small:
- the problem with bulk Al is straightforward to reproduce within minutes on a CPU system.
- the problem is not related to the plane wave cutoff/spline grids, since these are well converged.
- seemingly a good wavefunction is produced with D+J1+J2 optimization
- however, the OneShift optimizer immediately takes a wild step (a very large change in coefficients, ~10^9) when J3 is added, which subsequently produces a NaN during pseudopotential evaluation (see the coefficient-diff sketch at the end of this comment). The abort is therefore correct and not a bug; the problem lies with the optimizer or the wavefunction.
- this applies even when large numbers of samples are used for optimization - the optimizer tries to make a bad step.
It is worth noting that J3 is not expected to do very much here, but it still shouldn't go wrong like this. Conservative settings (e.g. increasing minwalkers) seem only to delay the problem.
It has been reported that using different optimizers can avoid the problem, but since they aren't necessarily optimizing the same objective function, they may be bypassing the problem rather than being immune to it.
My suspicions are that:
- J3 may somehow have a bug for this case. How other people have been able to use J3 successfully is a puzzle that would presumably be answered by identifying the bug.
- OneShift needs a better default or more conservative handling for this case for reasons that have yet to be determined.
Will try some larger cells now.
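One quick way to see the runaway step is to compare the Jastrow coefficients written out between optimization series. A rough stdlib-only sketch, assuming the run directory contains optimized-wavefunction files such as optJ3.s003.opt.xml with <coefficients> elements holding whitespace-separated values:
```python
#!/usr/bin/env python3
# Diagnostic sketch: report the largest change in each <coefficients> block
# between two optimized-wavefunction XML files.  The file naming and layout
# assumed here (*.sNNN.opt.xml containing <coefficients> elements) are
# assumptions about the run directory, not something this issue guarantees.
import sys
import xml.etree.ElementTree as ET

def read_coefficients(fname):
    """Return {coefficients id: [float, ...]} for every <coefficients> element."""
    coeffs = {}
    for elem in ET.parse(fname).getroot().iter('coefficients'):
        if elem.text is not None:
            coeffs[elem.get('id', 'unnamed')] = [float(x) for x in elem.text.split()]
    return coeffs

def compare(old_file, new_file):
    old, new = read_coefficients(old_file), read_coefficients(new_file)
    for cid in sorted(set(old) & set(new)):
        deltas = [abs(a - b) for a, b in zip(old[cid], new[cid])]
        if deltas:
            print(f'{cid:24s} max |delta| = {max(deltas):.3e}')

if __name__ == '__main__':
    # e.g. python diff_coeffs.py optJ3.s003.opt.xml optJ3.s004.opt.xml
    compare(sys.argv[1], sys.argv[2])
```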
Thank you for the update!
@prckent For Ilkka's reproducer, is there a GitHub issue, or where can I get it?
Ilkka's reproducer is a modified version of Annette's. You'll need a working Python ASE installation. ilkka.tar.gz
It is worth considering whether the 2-up / 1-down electron case is properly handled in J3.
Have you looked at the eigenvalue chosen by the mapping step after the eigenvalue solve? I don't think it gets printed out currently, but it probably should be.
@markdewing could this be https://github.com/QMCPACK/qmcpack/pull/4917 ?
Yes, it could be. The extremely large step is one of the symptoms.
Update: I still see the issue with the latest QMCPACK on NM Al. However, at Gani's suggestion, I switched to the quartic optimizer and no longer see the issue.
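For anyone else hitting this, the change amounts to selecting a different minimization method in the linear blocks; in Nexus terms it looks roughly like the sketch below, where everything except minmethod is a placeholder:
```python
from nexus import loop, linear

# Sketch: request the quartic line minimizer instead of OneShiftOnly for the
# cycles in which J3 is first optimized.  Other values are placeholders.
optJ3_calcs = [loop(max = 6,
                    qmc = linear(minmethod   = 'quartic',
                                 minwalkers  = 0.3,
                                 samples     = 25600,
                                 warmupsteps = 20,
                                 blocks      = 100,
                                 timestep    = 0.3))]
# passed to generate_qmcpack(..., calculations=optJ3_calcs)
```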
Added a new label for ongoing issues w/ batched.
Tagged this for v4. I think we need at least an understanding of what causes this, if not a fix; i.e., it is OK to postpone only if there is a workaround and we have sufficiently shown that the problem is not an underlying bug but an algorithmic limitation.
Moving to v4.1 ahead of the v4.0 release, since we have a workaround and the current understanding is that the problem is an algorithmic weakness and not a bug. Using a different optimizer or conservative settings when adding the J3 seems to bypass the problem. However, we still don't have a diagnosis of how the NaN arises when optimizing J3, so more work is needed: we should, at worst, error out with a useful diagnostic message, and at best the optimizer would properly adapt to the situation. It is concerning that such a small system can display the error.