
Strange behavior of general_stochastic

Open itayshom opened this issue 4 years ago • 3 comments

Hi,

I'm using general_stochastic to solve an SME which does not take the canonical form, so I can't use smesolve. There is an analytic solution to compare with. I'm pasting my code below; I followed this notebook. The expectation values returned in res.expect are completely wrong (let's say there's one trajectory). Calculating the expectation values 'manually' from the states using e.g. expect(x, res.states[0][-1]) also gives a wrong value. It turns out that the returned states are "vectorized", e.g.

>>> res.states[0][-1]
Quantum object: dims = [[32], [32]], shape = (1024, 1), type = oper, isherm = False
...

and furthermore are not normalized (even when using kwarg normalize=True). It seems that expect can't handle this type of object. Also .unit() does not seem to normalize properly.

The only way I got correct expectation values is to iterate over all the states, convert each one with vector_to_operator, and then normalize by dividing by tr(). This gives correct results, which shows the SME is formulated properly. However, it is extremely time consuming, and it requires storing all the states (store_states=True), which is slow and memory-hungry.

By the way, even vector_to_operator does not work as-is, as mentioned in #1204. It assumes the dims are like [[[rows], [cols]], [1]], whereas for the states returned from general_stochastic the dims are [[rows], [cols]] (see above), so this needed to be hacked (a sketch of the workaround follows below).
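Roughly, the hack looks like this (just a sketch of what I'm doing, overriding the dims by hand on a single stored state before converting):

ρ_last = res.states[0][-1].copy()
ρ_last.dims = [[[N], [N]], [1]]     # the dims layout vector_to_operator expects
ρ = vector_to_operator(ρ_last)
ρ = ρ / ρ.tr()                      # normalize by hand
print(expect(x, ρ))                 # this agrees with the analytic solution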

In #1204, it is mentioned that "vector_to_operator is part of the implementation of superoperators", however the state type is oper, not super. It is a density matrix returned by general_stochastic, but still vectorized.

It seems that expect should be able to handle vectorized, unnormalized states, which it does not. And of course the values returned in e_ops should be correct.

Thanks, and let me know if I missed something or need more information.

Code:

import numpy as np
from numpy import sqrt

import qutip
from qutip import destroy, ket2dm, expect, \
                  liouvillian, general_stochastic, \
                  cy, spre, spost, vector_to_operator, basis

times = np.linspace(0, 4, 4096)

N = 32

a = destroy(N) 

x = (a + a.dag()) / sqrt(2)
y = 1j*(a-a.dag()) / sqrt(2)

γ = 1
Ω = 0
χ = 2

H = Ω * a.dag()*a

c_ops = [sqrt(γ)*a, sqrt(χ)*x ]

ρ0 = ket2dm(basis(N))

e_ops = [a.dag()*a, x, x*x, y, y*y]

opts = qutip.solver.Options(store_states=True)

# deterministic part: D1[ρ] = L ρ, with L the Liouvillian of H and c_ops
L = liouvillian(H, c_ops=c_ops).data
def D1_ρ(t, ρ_vec):
    return cy.spmv(L, ρ_vec)

# stochastic part: D2[ρ] = √χ (xρ + ρx) − Tr[√χ (xρ + ρx)] ρ
n_sum = spre(sqrt(χ)*x) + spost(sqrt(χ)*x.dag())
n_sum_data = n_sum.data

def D2_ρ(t, ρ_vec):
    e1 = cy.cy_expect_rho_vec(n_sum_data, ρ_vec, False)
    out = np.zeros((1, len(ρ_vec)), dtype=complex)
    out += cy.spmv(n_sum_data, ρ_vec) - e1 * ρ_vec
    return out


ntraj = 2

res = general_stochastic(ρ0, times, d1=D1_ρ, d2=D2_ρ,
                          e_ops=[spre(op) for op in e_ops],
                          ntraj=ntraj, solver='explicit1.5',
                          m_ops=[spre(x)], dW_factors=[1/(2*sqrt(χ))],
                          store_measurement=True,
                          options=opts)

itayshom avatar Nov 08 '20 20:11 itayshom

general_stochastic does not know about closed vs. open systems. It expects the state to be a vector. You need to change the input from a density matrix to a vector with ρ0 = operator_to_vector(ket2dm(basis(N))) for it to work. You also need to promote the e_ops to superoperators with spre, as you already did.

Doing this will also fix the output state dims.
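Something like this should do it (a sketch, keeping your other arguments unchanged; operator_to_vector is imported from qutip):

ρ0_vec = operator_to_vector(ket2dm(basis(N)))   # vectorized density matrix

res = general_stochastic(ρ0_vec, times, d1=D1_ρ, d2=D2_ρ,
                         e_ops=[spre(op) for op in e_ops],
                         ntraj=ntraj, solver='explicit1.5',
                         m_ops=[spre(x)], dW_factors=[1/(2*sqrt(χ))],
                         store_measurement=True,
                         options=opts)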

Ericgig avatar Nov 10 '20 18:11 Ericgig

Thanks for the quick reply. I did as you suggested, and it does fix the dimensions. However, there are two new problems.

First, the call to general_stochastic now generates a numpy warning (shown with the full traceback):

  File "c:\program files\python37\lib\site-packages\qutip\stochastic.py", line 1190, in general_stochastic
    res = _sesolve_generic(sso, sso.options, sso.progress_bar)

  File "c:\program files\python37\lib\site-packages\qutip\stochastic.py", line 1302, in _sesolve_generic
    task_args, task_kwargs, **map_kwargs)

  File "c:\program files\python37\lib\site-packages\qutip\parallel.py", line 189, in serial_map
    result = task(value, *task_args, **task_kwargs)

  File "c:\program files\python37\lib\site-packages\qutip\stochastic.py", line 1358, in _single_trajectory
    result = ssolver.cy_sesolve_single_trajectory(i)#, sso)

  File "qutip\cy\stochastic.pyx", line 540, in qutip.cy.stochastic.StochasticSolver.cy_sesolve_single_trajectory

  File "c:\program files\python37\lib\site-packages\qutip\qobj.py", line 303, in __init__
    if not np.any(dims):

  File "<__array_function__ internals>", line 6, in any

  File "c:\program files\python37\lib\site-packages\numpy\core\fromnumeric.py", line 2330, in any
    return _wrapreduction(a, np.logical_or, 'any', axis, None, out, keepdims=keepdims)

  File "c:\program files\python37\lib\site-packages\numpy\core\fromnumeric.py", line 87, in _wrapreduction
    return ufunc.reduce(obj, axis, dtype, out, **passkwargs)

VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences 
(which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. 
If you meant to do this, you must specify 'dtype=object' when creating the ndarray

The second problem is more serious, and I think it was present in my original version as well. The expectation values returned in res.expect are not the same as those computed manually from res.states. The values in res.expect quickly diverge from the correct manual values that agree with theory; it is not a small error. (Note that I'm using a single trajectory, so there is no issue of averaging over trajectories.)

I would expect complete equality between the two. I suspect it has something to do with normalization(?). Passing the kwarg normalize=True doesn't change anything, and neither does switching the solver.

This is how I compute the expectations:

rho_list = [[vector_to_operator(u) for u in res.states[k]] for k in range(ntraj)]

for k in range(ntraj):
    for i, rho in enumerate(rho_list[k]):
        rho_list[k][i] = rho / rho.tr()
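To quantify the discrepancy, this is the kind of check I'm doing (a sketch for a single trajectory, ntraj=1, comparing the first operator):

vals_from_states = np.real(np.array([expect(e_ops[0], rho) for rho in rho_list[0]]))
vals_from_solver = np.real(res.expect[0])
print(np.abs(vals_from_states - vals_from_solver).max())   # large, not a rounding-level difference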

Thanks again. I hope this is the correct venue for this kind of post, but it does qualify as "strange behavior".

itayshom avatar Nov 13 '20 21:11 itayshom

The code below is a (hopefully) minimal working example that demonstrates the strange behavior of the stochastic solver. I'm simulating a single trajectory using general_stochastic and plotting the returned expectation values alongside expectation values calculated from the states. There is also a theoretical solution for the variances of X and Y, which are deterministic despite the stochastic evolution.

The manually calculated expectations match the theory perfectly, which means the states are correct. The returned ones accumulate errors and obviously shouldn't be used. I don't see a reason why they should differ; I would assume the expectations are calculated internally from the states.

Note that the deprecation warning from above still appears.

Output plots

[figure: var(X), var(Y), and ⟨X⟩ vs. time; the expectation values from the stored states follow the theory curves, while those from result.expect drift away]

Code

# -*- coding: utf-8 -*-

import matplotlib.pyplot as plt
import numpy as np

from numpy import sqrt, exp

import qutip

from qutip import destroy, basis, ket2dm, expect, liouvillian, \
                  general_stochastic, parallel_map, \
                  cy, spre, spost, vector_to_operator, \
                  operator_to_vector


#%%%%%%%% 

times = np.linspace(0, 2, 1024)

N = 16

a = destroy(N) 

x = (a + a.dag()) / sqrt(2)
y = 1j*(a-a.dag()) / sqrt(2)

γ = 1  # damping rate
Ω = 0
χ = 1  # measurement strength

H = Ω * a.dag()*a

c_ops = [sqrt(γ)*a, sqrt(χ)*x]

e_ops = [x, x*x, y, y*y]

ρ0 = ket2dm(basis(N))

opts = qutip.solver.Options(store_states=True)

ntraj = 1
nsubsteps = 1

# definitions for the SME
#
# in literature: dρ(t) = −i[H,ρ(t)]dt + γD[a]ρ(t)dt + dW(t)√γ H[a]ρ(t)
# in QuTiP:      dρ(t) = −i[H,ρ(t)]dt + D1[A]ρ(t)dt + D2[A]ρ(t)dW
#
# H[A]ρ = 0.5 * (Aρ + ρA† − Tr[Aρ+ρA†]ρ)

L = liouvillian(H, c_ops=c_ops).data
def D1_ρ(t, ρ_vec):
    return cy.spmv(L, ρ_vec)

# here: D2[A]ρ(t) = √χ H[x]ρ(t)  (measurement of x with strength χ)

n_sum = spre(sqrt(χ)*x) + spost(sqrt(χ)*x.dag())
n_sum_data = n_sum.data

def D2_ρ(t, ρ_vec):
    e1 = cy.cy_expect_rho_vec(n_sum_data, ρ_vec, False)
    out = np.zeros((1,len(ρ_vec)), dtype=complex)
    out += cy.spmv(n_sum_data, ρ_vec) - e1 * ρ_vec
    return out


res = general_stochastic(operator_to_vector(ρ0), times, d1=D1_ρ, d2=D2_ρ,
                          e_ops=[spre(op) for op in e_ops],
                          ntraj=ntraj, solver='explicit1.5',
                          m_ops=[spre(x)], dW_factors=[1/(2*sqrt(χ))],
                          nsubsteps=nsubsteps, store_measurement=True, normalize=True,
                          noise=123456, options=opts)


# theoretical expressions for the conditional variances

r = χ/γ

VX0 = (sqrt(1+8*r) - 1)/(8*r)

VX = VX0 + 2*VX0*(1 + 1/(8*VX0*r)) / (exp((8*VX0*r+1)*γ*times)*(1+1/(4*r*VX0))**2 - 1)

VY = 0.5 + r - r*exp(-γ*times)


#%%%%%%%% Calculate expectation values manually from the stored states

rho_list = [vector_to_operator(rho) for rho in res.states[0]]

for i, rho in enumerate(rho_list):
    rho_list[i] = rho / rho.tr()

my_expect = [expect(op, rho_list) for op in e_ops]


#%%%%%%%% Plotting

plt.figure('Time Evolution 2 -- Quantum State', clear=True)

ax1 = plt.subplot(3,1,1, ylabel=r'$\langle X^2\rangle - \langle X\rangle^2$')
ax2 = plt.subplot(3,1,2, ylabel=r'$\langle Y^2\rangle - \langle Y\rangle^2$')
ax3 = plt.subplot(3,1,3, ylabel=r'$\langle X\rangle$', xlabel='time')

# plot expectation values calculated from states
ax1.plot(times, my_expect[1]-my_expect[0]**2, label=r'$\langle\rangle$ from stored state')
ax2.plot(times, my_expect[3]-my_expect[2]**2, label=r'$\langle\rangle$ from stored state')
ax3.plot(times, my_expect[0], label=r'$\langle\rangle$ from stored state')

# plot expectation values from result.expect
ax1.plot(times, res.expect[1]-res.expect[0]**2, label=r'$\langle\rangle$ from result.expect')
ax2.plot(times, res.expect[3]-res.expect[2]**2, label=r'$\langle\rangle$ from result.expect')
ax3.plot(times, res.expect[0], label=r'$\langle\rangle$ from result.expect')

ax1.plot(times, VX, 'k--', label='theory')
ax2.plot(times, VY, 'k--', label='theory')

ax1.legend()
ax2.legend()
ax3.legend()

ax1.set_xticks([])
ax2.set_xticks([])

plt.tight_layout()

itayshom avatar Dec 01 '20 18:12 itayshom

General stochastic has been removed.

Ericgig avatar Mar 29 '24 05:03 Ericgig