Simulation unstable when using triclinic updates in MTTK integrators
I was testing out some simulations with GPUMD and I noticed that when using the tri option in the npt_mttk ensemble, the atoms break apart. This only occurs with the tri option, not with the iso or aniso options for updating the cell vectors. I initially noticed this with a model I had trained, but I also tested it with one of the models in the nep-data GitLab repo. Specifically, I used the CsPbBr3 model there, together with the corresponding model.xyz, and the following input:
potential nep.txt
# The equilibration stage
ensemble npt_mttk temp 0.10 0.10 tri 0 0
time_step 1
# Output one data point every 10000 steps
dump_thermo 100
dump_position 100
run 10000
The cell parameter changes, which indicate a large volume change during the simulation, are shown below:
For comparison, the following is obtained when I use aniso instead of tri:
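(A quick way to reproduce this kind of plot from thermo.out is a short script along the following lines. The column layout assumed here, namely that the box occupies the last columns of thermo.out, with 3 box lengths for an orthogonal box or 9 cell-vector components for a triclinic one, should be checked against the thermo.out description in the GPUMD manual for your version.)

# Sketch for plotting the box volume from thermo.out (not part of the original post).
# Assumption: the box occupies the last columns of thermo.out, either 3 box lengths
# (orthogonal) or the 9 components of the cell vectors (triclinic); check the GPUMD
# manual of your version for the exact column layout.
import numpy as np
import matplotlib.pyplot as plt

data = np.loadtxt("thermo.out")

if data.shape[1] >= 18:  # assumed triclinic layout: last 9 columns are the cell vectors
    cells = data[:, -9:].reshape(-1, 3, 3)
    volume = np.abs(np.linalg.det(cells))
else:                    # assumed orthogonal layout: last 3 columns are the box lengths
    volume = np.prod(data[:, -3:], axis=1)

steps = np.arange(len(volume)) * 100  # dump_thermo 100 in the input above
plt.plot(steps, volume)
plt.xlabel("step")
plt.ylabel("box volume (A^3)")
plt.savefig("volume.png", dpi=150)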
I guess it is because the MTTK barostat is not very stable.
So you want to control all 6 degrees of freedom of the box. In this case, you can try the more stable Berendsen or SCR method. For example, using the SCR method, you can write
ensemble npt_scr 0.10 0.10 100 0 0 0 0 0 0 100 100 100 100 100 100 1000
Also, it is good practice to add a velocity command at the beginning to set up the initial temperature. In your example, where the target temperature is 0.1, it is better to add
velocity 0.1
With the above considerations, you will have:
potential nep.txt
velocity 0.1
# The equilibration stage
ensemble npt_scr 0.10 0.10 100 0 0 0 0 0 0 100 100 100 100 100 100 1000
time_step 1
# Output one data point every 10000 steps
dump_thermo 100
dump_position 100
run 10000
Thanks for the suggestion. Yes, I am currently using the SCR barostat, which seems to be working fine. I had previously run some simulations with LAMMPS, which uses the MTTK barostat, and that is why I tried it here. Also, I was under the impression that the MTTK integrator samples the NPT ensemble more reliably. But it seems the SCR barostat is also fine? I am unsure what differences I should expect. Thanks anyway.
The reason I posted this as an issue is that the instability arises very quickly (within 100 timesteps), so I thought it might be caused by an implementation problem. Also, the MTTK barostat with the tri option worked fine for a different system in LAMMPS (using a DeepMD potential), but when I trained an NEP model for that system, it led to the above issue in GPUMD (only with the tri option).
SCR can sample the NPT ensemble properly according to this paper: 10.1063/5.0020514
It could well be that the implementation of the tri option for MTTK in GPUMD has problems. Could you check if your NEP model can run properly in LAMMPS with the tri option? Here is the LAMMPS interface for NEP in case you have not seen it yet: https://github.com/brucefan1983/NEP_CPU
Oh I didn't know there was a LAMMPS implementation. Thanks for the info. I will try compiling that when I get time and will report the results here.
I will use the SCR barostat for my NEP simulations. Thanks for the pointer.
Due to certain limitations of the GPUMD code, we cannot change an orthogonal box into a triclinic box during a simulation. If you want to use tri MTTK, you need to make the box triclinic before the simulation. For example, you can add 0.0001 to an off-diagonal element of the cell in model.xyz to make it triclinic.
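As an illustration, here is a minimal sketch using ASE (ASE is just one convenient option and an assumption on my part; editing the Lattice line in model.xyz by hand works equally well):

# Sketch: make the cell in model.xyz slightly triclinic before running tri MTTK.
# ASE is used here only for convenience; keep a backup of model.xyz, and check that
# any extra per-atom columns (e.g. group labels) survive the read/write round trip.
from ase.io import read, write

atoms = read("model.xyz")
cell = atoms.get_cell()[:]       # 3x3 cell matrix as a numpy array
cell[1, 0] += 1e-4               # tiny tilt on one off-diagonal element (in Angstrom)
atoms.set_cell(cell, scale_atoms=False)
write("model.xyz", atoms)        # written back in extended XYZ format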
Thanks. This was indeed the issue, and it was solved by starting from an initially triclinic cell. It would be nice to print a warning in this case, where the cell is orthogonal but the requested simulation requires a triclinic box. Otherwise, feel free to close the issue.
Thanks for your explanation @psn417
@rashidrafeek I will leave it here for some time. Perhaps we can make this part more robust soon.
fixed in #854