TensorNetwork
Missing sqrt operation in tensornetwork/matrixproductstates/infinite_mps.py
The code in question is in tensornetwork/matrixproductstates/infinite_mps.py, lines 287 to 298:
```python
U, singvals, V, _ = self.backend.svd(
    tmp,
    pivot_axis=1,
    max_singular_values=D,
    max_truncation_error=truncation_threshold,
    relative=True)
lam = self.backend.diagflat(singvals)
self.tensors[0] = ncon([lam, V, inv_sqrtr, self.tensors[0]],
                       [[-1, 1], [1, 2], [2, 3], [3, -2, -3]],
                       backend=self.backend.name)
# absorb connector * inv_sqrtl * U * lam into the right-most tensor
# Note that lam is absorbed here, which means that the state
# is in the parallel decomposition
# Note that we absorb connector_matrix here
self.tensors[-1] = ncon([self.get_tensor(len(self) - 1), inv_sqrtl, U, lam],
                        [[-1, -2, 1], [1, 2], [2, 3], [3, -3]],
                        backend=self.backend.name)
```
Here `lam` is the diagonal matrix of singular values from the SVD. When it is contracted into both the first tensor (i.e. `self.tensors[0]`) and the last tensor (i.e. `self.tensors[-1]`), it should be `sqrt(lam)`, not `lam`, to keep the whole MPS invariant; alternatively, as the comment says, the connector should be absorbed into only the left or only the right tensor.
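A minimal NumPy sketch (standalone, not TensorNetwork code) of why the square root is needed: absorbing `sqrt(S)` into each neighboring tensor preserves the product `A @ S @ B`, while absorbing the full `S` into both sides inserts `S` twice.

```python
import numpy as np

rng = np.random.default_rng(0)
s = rng.random(4) + 0.1            # positive "singular values"
S = np.diag(s)
sqrt_S = np.diag(np.sqrt(s))

A = rng.random((4, 4))             # stand-ins for the neighboring tensors
B = rng.random((4, 4))

# absorbing sqrt(S) into each side reproduces A @ S @ B exactly
assert np.allclose((A @ sqrt_S) @ (sqrt_S @ B), A @ S @ B)

# absorbing the full S into each side inserts S twice instead
assert not np.allclose((A @ S) @ (S @ B), A @ S @ B)
```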
So the corrected code is:

```python
sqrt_lam = self.backend.sqrt(lam)
self.tensors[0] = ncon([sqrt_lam, V, inv_sqrtr, self.tensors[0]],
                       [[-1, 1], [1, 2], [2, 3], [3, -2, -3]],
                       backend=self.backend.name)
self.tensors[-1] = ncon([self.get_tensor(len(self) - 1), inv_sqrtl, U, sqrt_lam],
                        [[-1, -2, 1], [1, 2], [2, 3], [3, -3]],
                        backend=self.backend.name)
```
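Note that because `lam` is diagonal with non-negative entries, an elementwise square root (which is what a backend-level `sqrt` would compute; the exact backend API is an assumption here) is also its matrix square root, so the two `sqrt(lam)` factors recombine into `lam`. In plain NumPy:

```python
import numpy as np

# lam is diagonal with non-negative entries, so the elementwise sqrt
# is also the matrix square root: sqrt_lam @ sqrt_lam == lam
lam = np.diag([0.9, 0.5, 0.3, 0.1])
sqrt_lam = np.sqrt(lam)
assert np.allclose(sqrt_lam @ sqrt_lam, lam)
```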
With this change, the contraction between `self.tensors[-1]` and `self.tensors[0]` stays the same:

```
new self.tensors[-1] <-> new self.tensors[0]
  = self.get_tensor(len(self) - 1)
    <-> inv_sqrtl <-> U <-> sqrt(lam) <-> sqrt(lam) <-> V <-> inv_sqrtr
    <-> self.tensors[0]
```

since `sqrt(lam) <-> sqrt(lam)` recombines into a single `lam`.
I am currently learning the canonical form of iMPS, so in case I made a mistake, I also checked the invariant:
```python
X, U, S, V, Y = inv_sqrtl, U, lam, V, inv_sqrtr
print(np.einsum('ea,ab,bc,cd,df->ef', X, U, S, V, Y).real)
```

```
[[ 1.00000000e+00 -5.85469173e-18  3.96384314e-16 -9.84021892e-16]
 [ 1.24032729e-16  1.00000000e+00 -4.77916318e-16  4.44089210e-16]
 [ 9.77950360e-16 -6.96925156e-16  1.00000000e+00 -6.01081684e-16]
 [ 8.32667268e-16  2.81458884e-16  6.97358837e-16  1.00000000e+00]]
```
However, if I insert `S` twice, the chain is no longer the identity:

```python
print(np.einsum('ea,ab,bc,cd,df,fg->eg', X, U, S, S, V, Y).real)
```

```
[[ 0.22848829 -0.03708758 -0.0173063   0.06811959]
 [-0.05568507  0.33938397  0.05829849  0.00706872]
 [-0.04525413  0.09879068  0.303025    0.03182943]
 [-0.04647557  0.00931772 -0.00282994  0.39147297]]
```
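The same two checks can be reproduced standalone, without the iMPS environment tensors, by building matrices with the same structure (assumed setup: `X` and `Y` are chosen so that the single-`S` chain contracts to the identity):

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.random((4, 4)) + np.eye(4)       # generically invertible
U, s, V = np.linalg.svd(M)               # M = U @ diag(s) @ V
S = np.diag(s)
X, Y = np.linalg.inv(M), np.eye(4)       # so that X @ M @ Y == I

# single S in the chain: identity, as in the first printout above
one_S = np.einsum('ea,ab,bc,cd,df->ef', X, U, S, V, Y)
assert np.allclose(one_S, np.eye(4))

# S inserted twice: no longer the identity
two_S = np.einsum('ea,ab,bc,cd,df,fg->eg', X, U, S, S, V, Y)
assert not np.allclose(two_S, np.eye(4))
```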