ITensors.jl
[NDTensors] [BUG] Iterating over GradedUnitRange yields unexpected type change
```julia
using NDTensors.Sectors: U1
using NDTensors.GradedAxes: gradedrange

g1 = gradedrange([U1(0) => 2, U1(1) => 1])
for s in g1
    println(s, " ", typeof(s))
end
```
outputs:

```
NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}(1, U(1)[0]) NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}
2 Int64
3 Int64
```
Note that the element type changes after the first element. Iterating with `for s in eachindex(g1)` gives the same result (I guess they are the same internally).
I dug into this a bit. Best I can tell, iteration in BlockArrays (which `GradedUnitRange` is implemented through) internally calls a function `inc`, which produces the next value in the iteration by adding the integer 1, like this:

```julia
inc(state::Tuple{Integer}, start::Tuple{Integer}, stop::Tuple{Integer}) = (state[1]+1,)
```

(the above is line 411 of `BlockArrays/src/blockindices.jl`).
Then, for a labelled integer, adding `1` returns a regular integer. So I think that is the origin of the current behavior.
(Just giving some background so we can consider the next step to take / fix.)
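The suspected root cause can be checked directly, without going through the iteration machinery. This is a minimal sketch based on the behavior described above (the exact printed types depend on the NDTensors version):

```julia
using NDTensors.Sectors: U1
using NDTensors.GradedAxes: gradedrange

g1 = gradedrange([U1(0) => 2, U1(1) => 1])

# The first element carries its sector label:
s = first(g1)
println(typeof(s))  # a LabelledInteger{Int64, U1{Int64}}

# Adding a plain Int, as BlockArrays' `inc` does internally,
# drops the label and returns a bare integer:
println(typeof(s + 1))  # a plain Int64, matching the reported output
```

If that holds, the fix is to make `+` (or the increment path used during iteration) label-preserving for `LabelledInteger`, rather than changing the iteration protocol itself.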
This will be fixed in a rewrite I will do of GradedAxes, which will account for the new BlockArrays.jl v1.0 release.
It was easy enough to fix in #1468 without rewriting the GradedUnitRange type for now.