How to save sample configurations?
Hi QMCPACK developers,
In my calculations, I need access to the electron positions of all the samples. I am wondering how to do this when using QMCPACK with Nexus.
Thanks, Lizhu
See if https://qmcpack.readthedocs.io/en/develop/methods.html#walker-data-logging will do what you need. The logging capability will save walker data from a VMC or DMC run and has various options to control the amount of data.
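For reference, logging is enabled with a single element placed before the <qmc> blocks. A minimal sketch using the step_period and particle attributes described in the linked docs (the values here are illustrative):

```xml
<!-- save walker data, including particle (electron) positions, every step -->
<walkerlogs step_period="1" particle="yes"/>
```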
Thanks! This is really helpful! But I have another issue...
When I tried to set step_period to values larger than 1 as:
<walkerlogs particle="yes" step_period="10"/>
<qmc method="vmc" move="pbyp" gpu="yes" checkpoint="20">
  <estimator name="LocalEnergy" hdf5="no"/>
  <parameter name="walkers"> 1 </parameter>
  <parameter name="samplesperthread"> 64 </parameter>
  <parameter name="stepsbetweensamples"> 1 </parameter>
  <parameter name="substeps"> 5 </parameter>
  <parameter name="warmupSteps"> 100 </parameter>
  <parameter name="blocks"> 1 </parameter>
  <parameter name="timestep"> 1.0 </parameter>
  <parameter name="usedrift"> no </parameter>
</qmc>
<qmc method="dmc" move="pbyp" checkpoint="-1" gpu="yes">
  <estimator name="LocalEnergy" hdf5="no"/>
  <parameter name="minimumtargetwalkers"> 128 </parameter>
  <parameter name="reconfiguration"> no </parameter>
  <parameter name="warmupSteps"> 100 </parameter>
  <parameter name="timestep"> 0.005 </parameter>
  <parameter name="steps"> 10 </parameter>
  <parameter name="blocks"> 200 </parameter>
  <parameter name="nonlocalmoves"> yes </parameter>
</qmc>
I got the following error:
WalkerLogManager::checkCollectors log buffer widths of collectors do not match
contiguous write is impossible
this was first caused by collectors contributing array logs from identical, but differently named, particlesets such as e, e2, e3 ... (fixed)
please check the WalkerLogManager summaries printed above.
I am wondering which parameter I should change so that the coordinates are dumped every N steps.
Thanks, Lizhu
Thanks for reporting. This is expected to work, so I suspect an incompatibility with some of the parameters in your VMC block; my guess is that there is an issue with the logic around samplesperthread. Once our QMC summer school is fully underway we'll be able to investigate the problem. In the meantime, try removing walkers and samplesperthread, adding a step count, and verifying that you are using the batched drivers. You could also try adding logging to the example in https://github.com/QMCPACK/qmc_summer_school_2025/tree/main/session1_introduction/H2O_example .
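As a rough sketch of that suggestion (parameter names and values here are illustrative assumptions, not a tested input; total_walkers is used as the batched-driver walker-count parameter, so check the driver documentation for your version):

```xml
<!-- hypothetical batched-driver VMC block: walkers/samplesperthread removed,
     an explicit walker count and step count set instead -->
<qmc method="vmc" move="pbyp" gpu="yes" checkpoint="20">
  <estimator name="LocalEnergy" hdf5="no"/>
  <parameter name="total_walkers"> 64 </parameter>
  <parameter name="warmupSteps"> 100 </parameter>
  <parameter name="blocks"> 1 </parameter>
  <parameter name="steps"> 64 </parameter>
  <parameter name="substeps"> 5 </parameter>
  <parameter name="timestep"> 1.0 </parameter>
  <parameter name="usedrift"> no </parameter>
</qmc>
```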
Thanks! My input was using the legacy driver, and after switching to the batched driver the logger now saves data every N > 1 steps. However, I also notice that the computation time increases by ~3x with the same input.
Here is my timing for the legacy driver:
Timer Inclusive_time Exclusive_time Calls Time_per_call
Total 18.1136 0.0002 1 18.113593000
Startup 0.1321 0.1321 1 0.132111000
VMC 17.9813 17.9813 1 17.981289000
Here is my timing for the batched driver:
Timer Inclusive_time Exclusive_time Calls Time_per_call
Total 43.2980 0.0003 1 43.298001000
Startup 0.1283 0.1283 1 0.128343000
VMCBatched 43.1693 43.1693 1 43.169319000
Here is my VMC section of the input:
<qmc method="vmc" move="pbyp">
  <parameter name="warmupSteps"> 2000 </parameter>
  <parameter name="blocks"> 800 </parameter>
  <parameter name="steps"> 100 </parameter>
  <parameter name="subSteps"> 2 </parameter>
  <parameter name="timestep"> 0.4 </parameter>
</qmc>
I am using OpenMP and not MPI. Does the batched driver interpret the input differently than the legacy driver? My understanding is that both the legacy and batched drivers should run the same number of Monte Carlo steps, so the computation time should be about the same.
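A quick sanity check of the expected workload implied by the VMC block above, under the assumption (which is what the input suggests) that both drivers perform blocks × steps × substeps particle-by-particle sweeps per walker after warmup:

```python
# Values from the VMC block in the post above
blocks = 800
steps = 100
substeps = 2

# Each step performs `substeps` particle-by-particle sweeps,
# so the post-warmup sweep count per walker is:
sweeps_per_walker = blocks * steps * substeps
print(sweeps_per_walker)  # 160000
```

If both drivers agree on this count, the ~3x slowdown would come from per-step overhead rather than from running extra steps.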