Kieran Russell
```
>>> data = np.load('bias_1919998988_217.npy', allow_pickle=True)
>>> print(data.dtype)
float64
>>> print(data)
[[9.61563464e-09 9.64362028e-09 9.31268291e-09 ... 3.88265784e-54
  2.15981861e-55 1.15361877e-56]
 [3.16391082e-08 3.17743468e-08 3.07317995e-08 ... 1.39704753e-53
  7.77217898e-55 4.15173548e-56]
 [1.00031052e-07 1.00602259e-07 9.74606735e-08 ... 4.82597592e-53
  2.68509461e-54...
```
Nothing changes, which is very strange. Clearly the second job is able to access the bias values in the correct datatype to start the simulation and run for 2 million...
Hm ok, thanks for the explanation. I think it is somewhat reproducible in that it happened 3 times earlier. It's a bit late here in the UK, but I'll try...
It turns out there were a few problems with my setup, so it took some time to get those fixed and repeat things. So far the problem has not returned...
I guess so! Thanks a lot for the help as always! I did have one more question, if you don't mind. I want to implement an asphericity CV using equations...
Hello, so I got round to doing this and I think I implemented it correctly, but it runs rather slowly. My implementation goes something like this: (Get components of **T**)...
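Since the implementation above is only sketched out, here is a rough, hypothetical version of a gyration-tensor asphericity calculation done with NumPy rather than explicit Python loops; the function name, the mass weighting, and the particular definition b = l1 - (l2 + l3)/2 are my assumptions and may not match the equations being used. The usual reason such a CV runs slowly is a per-atom Python loop when building **T**, so this sketch builds the whole tensor in a single `einsum` call.

```python
import numpy as np

def asphericity(positions, masses=None):
    """Asphericity b = l1 - (l2 + l3)/2 from the gyration tensor.

    positions : (N, 3) array of coordinates
    masses    : optional (N,) array of masses (uniform if omitted)
    """
    positions = np.asarray(positions, dtype=float)
    if masses is None:
        masses = np.ones(len(positions))
    masses = np.asarray(masses, dtype=float)

    # Centre the coordinates on the centre of mass.
    com = np.average(positions, axis=0, weights=masses)
    dr = positions - com

    # Mass-weighted gyration tensor T built in one vectorised call,
    # instead of accumulating its components atom by atom.
    T = np.einsum('i,ij,ik->jk', masses, dr, dr) / masses.sum()

    # Eigenvalues sorted in descending order (l1 >= l2 >= l3).
    l1, l2, l3 = np.linalg.eigvalsh(T)[::-1]
    return l1 - 0.5 * (l2 + l3)
```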
I was defining a bond between the central Ca of each coarse-grained peptide in the system, so there end up being quite a few, e.g. 2080 in a 64-chain...
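If the pair list is being written out by hand, a small script can generate it instead; this is a purely hypothetical sketch, since the real atom numbering depends on the topology, and `n_chains`, `atoms_per_chain`, and `central_offset` are placeholder values rather than numbers taken from the system above.

```python
from itertools import combinations

# Placeholder layout: one coarse-grained chain after another,
# with the central Ca at a fixed offset inside each chain.
n_chains = 64
atoms_per_chain = 27
central_offset = 13

central_ca = [chain * atoms_per_chain + central_offset
              for chain in range(n_chains)]

# Every unique pair of central Ca atoms: n_chains*(n_chains-1)//2 pairs.
pairs = list(combinations(central_ca, 2))
print(len(pairs))
```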
I tested out the modification in multistatesampler.py, and with the settings and software on my cluster it did not seem to lead to the correct behavior with just `mpirun python...
Is there any update on this? I'm also looking to use CHARMM36m with OpenMM. A possible workaround is to prepare the system with the CHARMM/GMX setup tools and load that into OpenMM, but...
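For what it's worth, if the setup tools end up producing GROMACS-format files, the OpenMM side of that workaround looks roughly like the sketch below; the file names and the GROMACS include directory are placeholders, not paths from an actual setup.

```python
from openmm import unit
from openmm.app import GromacsGroFile, GromacsTopFile, PME, HBonds

# Placeholder file names; these would come from the CHARMM/GMX setup tools.
gro = GromacsGroFile('system.gro')
top = GromacsTopFile('topol.top',
                     periodicBoxVectors=gro.getPeriodicBoxVectors(),
                     includeDir='/usr/local/gromacs/share/gromacs/top')

# Example nonbonded settings; adjust to whatever the CHARMM36m
# recommendations are for your system.
system = top.createSystem(nonbondedMethod=PME,
                          nonbondedCutoff=1.2*unit.nanometer,
                          constraints=HBonds)
```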