DFTK.jl
Modified kinetic term
Dear Antoine and Michael,
Here are, in a nutshell, the modifications I made to DFTK to add a modified kinetic energy term, which gives a chosen level of smoothness of the energy bands at a small cost in precision on the energies (a rough sketch of the idea follows the list below). I have two questions on this PR:
- I added a file src/terms/modified_kinetic_blow_up_function.jl in src/terms, which defines a function used in src/terms/kinetic.jl, but I'm not sure this directory is the best place to put it.
- I made an example showing how to compute the variation of the total energy with the volume of the silicon unit cell. The final plot shows big discontinuities for the standard kinetic energy and a smooth curve for the modified one, which is pretty nice. I'm not sure, however, that the way I did it is the best way to do so. For now I didn't put much commentary in, in case the example has to change. Tell me what you think about it!
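For readers unfamiliar with the idea, here is a minimal, purely illustrative sketch (not the PR's actual code; the function name, threshold and tail coefficients are made up) of how a blow-up function modifies the plane-wave kinetic dispersion:

```julia
# Illustrative only: replace the plane-wave dispersion |k+G|^2 / 2 by
# Ecut * g(|k+G|^2 / (2 Ecut)), where g(x) is the identity well below the
# cutoff and diverges as x -> 1, so that plane waves entering or leaving the
# basis carry vanishing weight and the energy varies smoothly with the cell.
function blowup_sketch(x; threshold=0.85)
    x < threshold && return x      # unmodified region: g(x) = x
    x >= 1        && return Inf    # at or beyond the cutoff sphere
    # Diverging tail ~ 1/(1 - x)^2, scaled to be continuous at the threshold.
    # (The actual CHV construction also matches derivatives, to reach C^m.)
    threshold * (1 - threshold)^2 / (1 - x)^2
end

# Modified kinetic energy of a plane wave with |k + G|^2 = qsq
modified_kinetic(qsq, Ecut) = Ecut * blowup_sketch(qsq / (2Ecut))
```

With g(x) = x everywhere one recovers the standard kinetic term, since Ecut * |k+G|^2 / (2 Ecut) = |k+G|^2 / 2.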
PS: the paper that led to this implementation (with Eric and Hassan) is to be submitted this week if everything goes to plan.
Best, Laurent
Hi, just out of curiosity, is this related to the "energy cutoff smearing" of ABINIT (https://docs.abinit.org/variables/rlx/#ecutsm, Appendix B of https://journals.aps.org/prb/abstract/10.1103/PhysRevB.93.205147)? I was curious about how useful it is (I haven't tried it - I mostly use QE and QE doesn't have it).
Maybe I should wait a bit for the paper to come out :)
I'll let @LaurentVidal95 answer in more detail, but yes, it's the same. I'd imagine it's useful for plotting nice smooth curves, but I don't think it improves accuracy.
Basic idea sounds good, but a bit too much new code for my taste
I tried to merge ModifiedKinetic into Kinetic; tell me if it looks more like what you had in mind! I still have to write the full text for the example and add references. Also, for testing I launch a quick SCF computation on silicon LDA with a blow-up function and check convergence, but maybe there's something else to check!
Hi @jaemolihm!
As I understand it now, the cutoff smearing is helpful when you cannot afford a high Ecut. For example, in unit-cell relaxation or geometry optimization, the size of the plane-wave basis at low Ecut changes a lot as the unit cell varies, and the energy with respect to the cell volume displays irregularities. An appropriate cutoff smearing ensures regularity of the bands, at the price of a slight overestimation of the energies.
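To make that concrete, here is a small self-contained illustration (simple cubic cell, made-up numbers, independent of DFTK): at fixed Ecut the number of plane waves in the basis jumps as the lattice constant varies, and with the standard dispersion each jump produces a kink in the energy-versus-volume curve.

```julia
# Count plane waves G = 2π/a * (i, j, k) with |G|^2 / 2 <= Ecut for a simple
# cubic cell of side a: the count jumps by whole shells as `a` varies, which
# is what shows up as kinks in energy-versus-volume curves at low Ecut.
function n_planewaves(a, Ecut; nmax=20)
    count((2π / a)^2 * (i^2 + j^2 + k^2) / 2 <= Ecut
          for i in -nmax:nmax, j in -nmax:nmax, k in -nmax:nmax)
end

Ecut = 5.0  # arbitrary illustrative value (Hartree)
for a in 9.8:0.05:10.2  # lattice constants in Bohr, also illustrative
    println("a = $a  ->  $(n_planewaves(a, Ecut)) plane waves")
end
```

With the blow-up, the coefficients on plane waves near the cutoff are forced to vanish smoothly, so these basis-set jumps become harmless.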
Indeed, Abinit also features an energy cutoff smearing! I implemented it in the new version of the PR, by the way, but its behavior is pretty strange: the blow-up function is really discontinuous if the parameters are not chosen wisely, so I might have done something wrong that I don't see for now!
The advantage of the other blow-up function (CHV) is that we have a mathematical proof that it provides C^m regularity, with m depending on your parameters. If you are interested, the paper is to be submitted this week; I can forward it to you as soon as it is online :+1:!
Thanks for the kind explanation! It sounds like a nice mathematical property, and it's also nice that DFTK allows much flexibility to play with all those details. It would be great if you could forward the paper when it's online. Thanks a lot 😄
Hello everyone, in the last commit I applied the changes you suggested, @mfherbst. Tell me if that fits what you meant! I still have to finish the documentation. The graph in the example takes a while to generate, so I'm planning to just link to a PDF file.
Also, I have removed the Abinit blow-up function because it is equivalent to the CHV one for a blow-up of order 1. So the Abinit one only ensures C^1 regularity, whereas the CHV one with the current hard-coded coefficients gives C^2 regularity. Also, I wasn't completely sure about my implementation of the Abinit function, which was discontinuous for certain coefficients.
If needed, I can just switch back to the commit where it was implemented.
I don't think it's bad to have the Abinit version as well, unless the cost of your implementation is always the same and it always provides better regularity. In any case, I would add a comment about this relationship in the code and in the example, as people might be more familiar with the Abinit trick.
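For reference, a sketch of the Abinit-style factor as described in the ecutsm documentation linked above (worth double-checking against that page; names and structure here are illustrative, not DFTK's actual code):

```julia
# Within `ecutsm` of the cutoff, the plane-wave kinetic energy is multiplied by
# 1 / (x^2 * (3 - 2x)) with x = (Ecut - ekin) / ecutsm; the factor is 1 at the
# edge of the smearing window and diverges as ekin approaches Ecut.
function abinit_style_kinetic(ekin, Ecut, ecutsm)
    ekin >= Ecut          && return Inf    # beyond the cutoff sphere
    ekin <= Ecut - ecutsm && return ekin   # unmodified region
    x = (Ecut - ekin) / ecutsm
    ekin / (x^2 * (3 - 2x))
end
```

At the edge of the smearing window the factor and its first derivative match the unmodified region, which is consistent with the C^1 regularity mentioned above.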
Dear all, I changed the structure of the code and added a more precise test and documentation, as discussed in person last week! I think the PR is ready to be reviewed as such. Let me know if that suits you better!
EDIT: Okay, it's a strange thing, but it seems I'm having problems with Revise, which only takes changes into account once I have committed them... This is why I didn't see some problems before this last commit.
Dear all, I don't understand why, but the online tests are failing while the same tests on my laptop work fine. Could it be a difference between Julia 1.6 and 1.8? I'll keep you informed if I understand what happens.
Not sure 1.6 vs 1.8 can explain it; it looks more like your test is very brittle and fails under a different random seed or something?
Okay, well, I'll just raise the test from "x < 10" to "x < 20" then!
Please don't: it means that it'll break 2 years from now because of an unrelated change, and then I'll have to debug it :-p Make sure your test is consistent physically, i.e. your "x large" (translated to x >= 10) should probably actually be "x large compared to y" (translated to x >= 10 y).
Indeed, that will be better!
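As a small illustration of the kind of relative check suggested above (the names and numbers are placeholders, not the PR's actual test quantities):

```julia
using Test

# Placeholder values standing in for whatever irregularity measure the test
# computes with the standard and with the modified kinetic term.
irregularity_standard = 12.3
irregularity_blowup   = 0.9

# Compare the two against each other instead of against an absolute magic
# number, so the check does not silently depend on units, Ecut or the seed.
@test irregularity_standard >= 10 * irregularity_blowup
```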
There's another error that occurs only in the MPI test, which says that I'm trying to take element 51 of a 50-element array. It happens when I'm computing the finite-difference derivative of the bands with something like [(x[i+1] - x[i])/delta for i in ...].
I'll try to find a fix, but if it has already happened to you I'll take any advice!
With MPI the bands are distributed across the processors so you have to get everything on one processor (or be clever) if you want to make this check. Just disable the test when running under MPI (see mpi_is_master in the tests)
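A minimal sketch of that pattern, with placeholder data (the real test would use the band values from DFTK and the MPI-master check from the test setup):

```julia
using Test

is_master = true                              # stand-in for the MPI-master check
bands = collect(range(0.0, 1.0; length=50))   # stand-in for 50 band values on a grid
δ = 1 / 49                                    # grid spacing for those values

if is_master  # only run the check where the full data has been gathered
    # Note the range 1:length(bands)-1, so bands[i+1] never runs past the end.
    deriv = [(bands[i+1] - bands[i]) / δ for i in 1:length(bands)-1]
    @test all(isfinite, deriv)
end
```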
@LaurentVidal95 I made a few modifications (mostly type stability and I also added a convenience flag to switch between blowups as I think that will be very useful).
One thing I wonder is: Why did you add @inline to all the functions?
Thanks for the modifications!
About the @inline: I think I simply misunderstood a previous comment by Antoine! I don't fully understand what it does, so I don't mind removing it.
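For context, a tiny illustration (nothing DFTK-specific): @inline is only a hint asking the compiler to inline the function body at its call sites, and short one-liners are usually inlined automatically anyway, so dropping it should change nothing.

```julia
# @inline is a compiler hint, not a semantic change: both definitions behave
# identically; only the generated code may differ.
@inline square_hint(x) = x * x
square_plain(x) = x * x

square_hint(3) == square_plain(3)  # true
```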