DFTK.jl
Allowing for fixed Fermi level
Adds a mode allowing computations at a fixed Fermi level, and a structure for the occupation scheme.
In an older version of the code there was the line
δψk[:, n] .+= ψk[:, m] .* αmn .* (dot(ψk[:, m], δHψ[ik][:, n]) - (n == m) * δεF)
now it's just
δψk[:, n] .+= ψk[:, m] .* αmn .* (dot(ψk[:, m], δHψ[ik][:, n]) * (n != m))
Is that expected? I do not understand how δεF is used now.
Yeah it's fine, the contribution got moved to δf. Just set δεF to zero in this case.
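A toy sketch of what moved where (hypothetical 2×2 data, not the actual DFTK variables): for n ≠ m the old and new expressions coincide, and the diagonal n == m piece, which carried δεF, is now zero in the new form because that contribution lives in the occupation response δf instead.

```julia
using LinearAlgebra: dot

ψk  = ComplexF64[1 0; 0 1]        # orthonormal toy orbitals (columns)
δHψ = ComplexF64[0.3 0.2; 0.1 0.4]
αmn = 0.5
δεF = 0.0                         # fixed Fermi level: no εF response

# Old expression: diagonal term retained, δεF subtracted on it.
old_term(n, m) = ψk[:, m] .* αmn .* (dot(ψk[:, m], δHψ[:, n]) - (n == m) * δεF)
# New expression: diagonal term masked out entirely.
new_term(n, m) = ψk[:, m] .* αmn .* (dot(ψk[:, m], δHψ[:, n]) * (n != m))
```

Off-diagonal terms agree exactly; the diagonal term of the new form is identically zero.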
I have a very strange issue with the tests with this commit. In tests/PlaneWaveBasis.jl, the line
@test all(Gplusk_vectors_cart(basis, kpt) .== map(q -> model.recip_lattice * q,
Gplusk_vectors(basis, kpt)))
fails because some value is not zero but 1e-16. Any idea why that would happen?
I checked master and I do not have this issue.
That is very very weird. These two computations are doing literally the same thing, so they should agree completely. And I don't see anything on your PR that could impact this. Wtf.
> These two computations are doing literally the same thing
Maybe Julia can be more clever in one case and inline something? But indeed it is very strange that this is not exactly identical. Perhaps just go for an `≈` after all?
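A generic illustration of why `≈` is the safer comparison here (not the DFTK values): two mathematically identical floating-point computations can differ in the last bit, which `==` rejects but `isapprox` absorbs.

```julia
a = (0.1 + 0.2) + 0.3   # one summation order
b = 0.1 + (0.2 + 0.3)   # same math, different rounding path
a == b                  # false: the results differ at the 1e-16 level
a ≈ b                   # true: isapprox tolerates the rounding noise
```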
The test I added for the fixed Fermi level seems very simple. Should I check the returned values instead, or is it okay for it to be that broad?
Yes the test is too simple. Perhaps just run exactly at the Fermi level keeping the electron count once and once slightly above or below and see whether you get the expected changes.
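A generic sketch of that idea (toy Fermi–Dirac occupations with hypothetical eigenvalues, not DFTK code): running exactly at a Fermi level in the gap keeps the electron count, while shifting εF up or down changes it in the expected direction.

```julia
# Fermi–Dirac occupation at temperature T (a toy stand-in for the smearing).
fermi(ε, εF; T=0.01) = 1 / (1 + exp((ε - εF) / T))

εs = [-0.2, -0.1, 0.0, 0.1]            # hypothetical band energies
n_elec(εF) = 2 * sum(fermi.(εs, εF))   # spin-degenerate electron count

n_elec(-0.05)        # Fermi level in the gap: count stays at 4
n_elec(-0.05 + 0.1)  # above: more electrons
n_elec(-0.05 - 0.1)  # below: fewer electrons
```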
The isapprox problem seems related to StaticArrays.
MWE so far:
using StaticArrays

const Mat3{T} = SMatrix{3, 3, T, 9} where {T}

struct NOK{T <: Real}
    recip_lattice::Mat3{T}
    fixed_fermi_level::Union{T, Nothing}
end
struct OK{T <: Real}
    recip_lattice::Mat3{T}
    fixed_fermi_level::Nothing
end

function NOK(lattice::AbstractMatrix{T}) where {T <: Real}
    recip_lattice = 2T(π) * inv(lattice')
    NOK{T}(recip_lattice, nothing)
end
function OK(lattice::AbstractMatrix{T}) where {T <: Real}
    recip_lattice = 2T(π) * inv(lattice')
    OK{T}(recip_lattice, nothing)
end

@inline fun_nok(model::NOK, qred) = model.recip_lattice * qred
@inline fun_nok(model::OK, qred) = model.recip_lattice * qred
@noinline fun_ok(model::NOK, qred) = model.recip_lattice * qred
@noinline fun_ok(model::OK, qred) = model.recip_lattice * qred

function test()
    lattice = [0.0 5.131570667152971 5.131570667152971;
               5.131570667152971 0.0 5.131570667152971;
               5.131570667152971 5.131570667152971 0.0]
    m_ok = OK(lattice)
    m_nok = NOK(lattice)
    v = [-2.333333333333333481363069950020872056484222412109375000000000e+00,
         -6.666666666666667406815349750104360282421112060546875000000000e-01,
         -1.000000000000000000000000000000000000000000000000000000000000e+00]
    for fun in (fun_nok, fun_ok), m in (m_nok, m_ok)
        @show fun, typeof(m)
        # ok
        a = fun(m, v)
        # nok
        b = fun.(Ref(m), [v])[1]
        # ok
        c = m.recip_lattice * v
        display(b - c)
        @show a == c
        @show b == c
    end
end
test()
And putting @noinline before recip_... functions solves it.
Feels like a bug in either Julia or StaticArrays. Can you reduce it even further (maybe without structs at all, just @inline and @noinline)? Then we can inspect the generated code further.
I don't think I can remove the structs. The problem really appears when adding a Union field.
There is also no problem when I do not constrain the type of the recip_lattice field.
And with fewer abstract types:
using StaticArrays

const Mat3 = SMatrix{3, 3, Float64, 9}

struct NOK
    recip_lattice::Mat3
    fixed_fermi_level::Union{Float64, Nothing}
end
struct OK
    recip_lattice::Mat3
    fixed_fermi_level::Nothing
end
struct OK2
    recip_lattice
    fixed_fermi_level::Union{Float64, Nothing}
end

@inline fun_nok(model::NOK, qred) = model.recip_lattice * qred
@inline fun_nok(model::OK, qred) = model.recip_lattice * qred
@inline fun_nok(model::OK2, qred) = model.recip_lattice * qred
@noinline fun_ok(model::NOK, qred) = model.recip_lattice * qred
@noinline fun_ok(model::OK, qred) = model.recip_lattice * qred
@noinline fun_ok(model::OK2, qred) = model.recip_lattice * qred

function test()
    lattice = [0.0 5.131570667152971 5.131570667152971;
               5.131570667152971 0.0 5.131570667152971;
               5.131570667152971 5.131570667152971 0.0]
    recip_lattice = 2π * inv(lattice')
    m_ok = OK(recip_lattice, nothing)
    m_ok2 = OK2(recip_lattice, nothing)
    m_nok = NOK(recip_lattice, nothing)
    v = [-2.333333333333333481363069950020872056484222412109375000000000e+00,
         -6.666666666666667406815349750104360282421112060546875000000000e-01,
         -1.000000000000000000000000000000000000000000000000000000000000e+00]
    for fun in (fun_nok, fun_ok), m in (m_nok, m_ok, m_ok2)
        @show fun, typeof(m)
        # ok
        a = fun(m, v)
        # nok
        b = fun.(Ref(m), [v])[1]
        # ok
        c = m.recip_lattice * v
        display(b - c)
        @show a == c
        @show b == c
    end
end
test()
How can I show, at a lower level, how the structure is stored in memory?
Hm not sure. You can always do @code_native on the constructor, but I'm not aware of something that shows the struct layout (unless you can reverse engineer it from sizeof)
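A small sketch of the sizeof route (hypothetical structs mirroring the MWE): `sizeof` and `fieldoffset` expose the layout, and show that a `Union{Float64, Nothing}` field is stored inline (payload plus a selector byte) while a plain `Nothing` field takes no space at all.

```julia
struct WithUnion
    x::Float64
    f::Union{Float64, Nothing}   # inline union: payload plus a selector byte
end
struct WithNothing
    x::Float64
    f::Nothing                   # singleton type: occupies no bytes
end

sizeof(WithNothing)        # 8: only x is stored
sizeof(WithUnion)          # larger: union payload + tag (+ padding)
fieldoffset(WithUnion, 2)  # byte offset at which the union field starts
```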
See also `field<TAB>` at the REPL.
The `type<TAB>` completions don't help. The @code_typed output is identical, but the @code_native output is not.
OK. The code above is still a bit long, can't you isolate a smaller example (not necessarily exhaustive as you did above, just two behaviors that differ) so we can post it as a julia issue, so that somebody can have a look at it? Also is this consistent across julia versions?
I have started to write the issue, but with the Julia 1.8.1 that comes with my computer I get different values for the two structures (not good at all), and a difference between inlined and non-inlined code (alright). With a downloaded Julia 1.6.7, I get the same value for the two structures, and a difference between inlined and non-inlined code. With a downloaded Julia 1.8.1, I get no difference between any of the runs. Same with a downloaded 1.8.0.
But the downloaded 1.8.1 shows exactly the same commit as the one on my machine...
Will sleep on this.
I put an assert to make sure n_electrons is an Int.
Should I try in a future PR to factorize direct_minimization and newton?
Not sure how to get type stability for Canonical. Should I force each call to use Canonical{T}?
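One generic pattern for this (hypothetical names, not the actual DFTK types): keep `T` in the type domain and add a convenience constructor, so call sites don't have to spell out `Canonical{T}` themselves while inference still sees a concrete type.

```julia
using Test: @inferred

# A parameter-free singleton would force a runtime lookup of the element
# type; carrying T as a type parameter keeps it visible to dispatch.
struct Canonical{T} end
Canonical(::Type{T}) where {T} = Canonical{T}()  # hypothetical convenience constructor

temperature(::Canonical{T}) where {T} = zero(T)

c = Canonical(Float64)
@inferred temperature(c)   # inference knows the result type from c alone
```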
done on my side
redone for me
re²done; back to what it was at the beginning.
Same interface issue: _do_electrostatics is used to bypass the electrostatics check.
@epolack @antoine-levitt I changed the way the electrostatics check is done to unify the assumptions / exceptions we did in both the canonical and the grand-canonical ensembles. I think it is clearer now what happens and still gives the ability to disable if needed. Please take a look if you agree.
Ah crap, I still had a pending review in my queue... so some of these comments might be outdated, sorry
Small request: @mfherbst please do not rebase next time. I'm using a lot of this PR locally for different work, and the rebases make it harder for me to pull in the changes.
I tried to simplify a bit (possibly overwriting things you did previously, sorry if that's the case)