ITensors.jl
[ITensors] Port VUMPS functions
This PR adds a few of the functions living in https://github.com/ITensor/ITensorInfiniteMPS.jl/blob/main/src/ITensors.jl
The one thing missing at the moment is the more efficient sqrt(T::DiagTensor), which I couldn't get working, but the eigen-based implementation seems sane for now, so there may be no need for it.
It looks like there's an assumption being made that slicing an MPS, e.g. psi[2:4], returns a vector of ITensors instead of an MPS. I'm not sure whether we want to adjust that here or in ITensorInfiniteMPS.
Regarding the sliced MPS, the assumption seems to be about setting/getting with a unit range:
function deepcopy_ortho_center(M::AbstractMPS)
  M = copy(M)
  c = ortho_lims(M)
  # TODO: define `getindex(::AbstractMPS, I)` to return `AbstractMPS`
  M[c] = deepcopy(typeof(M)(M[c]))
  return M
end
What are your thoughts given the TODO, Matt?
I think slicing an MPS/MPO with a unit range should return an MPS/MPO. If we make that change, we can just update other code, like the snippet you quote, accordingly.
However, the definition in ITensorInfiniteMPS.jl: https://github.com/ITensor/ITensorInfiniteMPS.jl/blob/fc1078a60442081010447caf65c9a22c7278e4ce/src/ITensors.jl#L186 is too naive, as the comment states, since it doesn't preserve information about the orthogonality center.
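To illustrate the idea, here is a self-contained toy sketch (not ITensorMPS code; the ToyMPS type and llim/rlim fields are invented for illustration) of how getindex with a unit range could return the same wrapper type while shifting the stored orthogonality limits into the slice's coordinates, instead of discarding them:

```julia
# Toy stand-in for an MPS: a data vector plus left/right orthogonality limits.
# (Hypothetical type for illustration only; the real AbstractMPS is richer.)
struct ToyMPS
    data::Vector{Float64}
    llim::Int  # left orthogonality limit
    rlim::Int  # right orthogonality limit
end

function Base.getindex(M::ToyMPS, r::UnitRange{Int})
    # Shift the orthogonality limits into the slice's coordinates,
    # clamping them to the boundaries of the slice.
    new_llim = clamp(M.llim - first(r) + 1, 0, length(r))
    new_rlim = clamp(M.rlim - first(r) + 1, 1, length(r) + 1)
    return ToyMPS(M.data[r], new_llim, new_rlim)
end

M = ToyMPS(collect(1.0:6.0), 2, 4)  # "ortho center" at site 3
S = M[2:4]                          # same type, limits shifted to 1 and 3
```

The point is just that the slice remains the same type and retains the orthogonality information, which the naive definition linked above loses.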
Just pushed a hopefully better-performing version of sqrt. The original logic is a bit strange to me: it will fail for any matrix that isn't PSD unless a large atol is used. How do you feel about allowing conversion to complex, like Base.sqrt does?
julia> eigvals(array(A,i,i'))
2-element Vector{Float64}:
0.031100580805835387
1.5475689615743486
julia> sqrt(Hermitian(array(A,i,i'))) ≈ array(sqrt(A),i,i')
true
julia> eigvals(array(B,i,i'))
2-element Vector{Float64}:
-0.9688994191941646
0.5475689615743488
julia> sqrt(array(B,i,i'))
2×2 Matrix{ComplexF64}:
0.638411+0.135107im 0.254641-0.338726im
0.254641-0.338726im 0.101568+0.84922im
julia> sqrt(B)
ERROR: DomainError with -0.9688994191941646:
sqrt was called with a negative real argument but will only return a complex result if called with a complex argument. Try sqrt(Complex(x)).
Stacktrace:
[1] throw_complex_domainerror(f::Symbol, x::Float64)
@ Base.Math ./math.jl:33
[2] sqrt
@ ./math.jl:608 [inlined]
...
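For comparison, the complex-widening behavior being proposed can be sketched on plain arrays with just LinearAlgebra (a hedged sketch of the idea, not the DiagTensor code path; hermitian_sqrt is a hypothetical name): take the eigendecomposition and widen the eigenvalues to complex only when some are negative, mirroring Base's sqrt(Complex(x)) suggestion.

```julia
using LinearAlgebra

# Eigendecomposition-based square root of a Hermitian matrix that widens
# to a complex element type when the input has negative eigenvalues.
function hermitian_sqrt(A::AbstractMatrix)
    F = eigen(Hermitian(A))
    # Widen only if needed, so PSD inputs keep a real element type.
    vals = any(<(0), F.values) ? complex.(F.values) : F.values
    return F.vectors * Diagonal(sqrt.(vals)) * F.vectors'
end

B = [0.0 1.0; 1.0 0.0]  # eigenvalues -1 and 1, not PSD
S = hermitian_sqrt(B)   # complex result instead of a DomainError
S * S ≈ B               # true
```

This keeps the cheap real path for PSD inputs while matching the dense sqrt(Hermitian(...)) behavior shown above for indefinite ones.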
@mtfishman now that ITensorMPS is out of ITensors, I think this PR is complete pending your review, and I'll move over there to fix the slicing issue.
Thanks, I implemented all of those suggested changes. Do you have any suggestions on how to handle the complex conversion? It seems that map_diag won't convert when the map changes the element type (Float -> Complex),
i.e.
  A = random_itensor(i, i')
  ITensors.NDTensors.map_diag(x -> x + 1.0im, A)
crashes with
ERROR: InexactError: Float64(-1.4801732244792405 + 1.0im)
Stacktrace:
[1] Real
@ ./complex.jl:44 [inlined]
[2] convert
@ ./number.jl:7 [inlined]
[3] setindex!
@ ./array.jl:987 [inlined]
[4] setindex!
@ ./subarray.jl:384 [inlined]
...
(Theoretically this could be left to the user, but that seems a bit annoying.)
Looks like NDTensors.map_diag needs to be generalized to handle the element type widening.
It's a bit hacky, but one strategy could be to change NDTensors.map_diag to:
# Should go in `NDTensors.Exposed`, and specialized definitions may need to be added
# for certain GPU backends.
LinearAlgebra.copy_oftype(exposed_t::Exposed, elt::Type) = LinearAlgebra.copy_oftype(unexpose(exposed_t), elt)

function map_diag(f::Function, exposed_t::Exposed)
  new_elt = Base._return_type(f, Tuple{eltype(exposed_t)})
  t_dest = LinearAlgebra.copy_oftype(exposed_t, new_elt)
  map_diag!(f, expose(t_dest), exposed_t)
  return t_dest
end
Thanks Matt, but it seems I'm not familiar enough with the NDTensors internals to determine what needs to change. That snippet doesn't seem to work as-is: it hits a call to copyto! which is not implemented:
https://github.com/ITensor/ITensors.jl/blob/main/NDTensors/src/tensor/tensor.jl#L207
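For reference, the widening pattern itself can be demonstrated on plain Julia arrays without any NDTensors machinery (a hedged sketch of the idea only; map_diag_widening is a hypothetical name, and Base.promote_op plays the role of Base._return_type in Matt's snippet): compute the mapped element type first, allocate a widened copy, then update the diagonal in place.

```julia
using LinearAlgebra

# Apply f to the diagonal of A, widening the element type if f requires it
# (e.g. Float64 -> ComplexF64), instead of hitting an InexactError.
function map_diag_widening(f, A::AbstractMatrix)
    new_elt = Base.promote_op(f, eltype(A))  # element type after mapping
    dest = convert(Matrix{new_elt}, A)       # widened copy of A
    for i in 1:min(size(A)...)
        dest[i, i] = f(dest[i, i])
    end
    return dest
end

A = [1.0 2.0; 3.0 4.0]
map_diag_widening(x -> x + 1.0im, A)  # eltype widens to ComplexF64
```

The same two-step structure (widen, then call the in-place map_diag!) is what Matt's Exposed-based snippet is aiming for; the open question is supplying the copy/copyto! plumbing for the tensor types.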