
[Bug] wrong GPSR derivatives for single-qubit HamEvo

Open · jpmoutinho opened this issue on May 10, 2024 · 0 comments

import torch
from qadence import DiffMode, expectation, add, kron
from qadence import X, Y, Z, RX, HamEvo, FeatureParameter

x = FeatureParameter("x")

n_qubits = 2  # Works for n_qubits > 1; for n_qubits = 1 the results are wrong
scaling = 4   # The size of the error changes with this scaling when it multiplies the generator rather than the parameter

# Digital Block
digital_block = kron(RX(i, scaling * x) for i in range(n_qubits))

# Equivalent HamEvo Block
gen = add(X(i) for i in range(n_qubits))
#hamevo_block = HamEvo(0.5*gen, scaling*x)  # This works better
hamevo_block = HamEvo(0.5*scaling*gen, x)  # This works worse

# Create concrete values for the parameter we want to differentiate
xs = torch.linspace(0, 2*torch.pi, 100, requires_grad=True)
values = {"x": xs}

# Calculate function f(x)
obs = add(Z(i) for i in range(n_qubits))
exp_ad = expectation(digital_block, observable=obs, values=values, diff_mode=DiffMode.AD)
exp_gpsr_dig = expectation(digital_block, observable=obs, values=values, diff_mode=DiffMode.GPSR)
exp_gpsr_ham = expectation(hamevo_block, observable=obs, values=values, diff_mode=DiffMode.GPSR)

# Check we are indeed computing the same thing
assert (torch.allclose(exp_ad, exp_gpsr_dig) and torch.allclose(exp_ad, exp_gpsr_ham))

# Calculate the derivative df/dx using PyTorch autograd
dfdx_ad = torch.autograd.grad(exp_ad, xs, torch.ones_like(exp_ad), create_graph=True)[0]
dfdx_gpsr_dig = torch.autograd.grad(exp_gpsr_dig, xs, torch.ones_like(exp_gpsr_dig), create_graph=True)[0]
dfdx_gpsr_ham = torch.autograd.grad(exp_gpsr_ham, xs, torch.ones_like(exp_gpsr_ham), create_graph=True)[0]

import matplotlib.pyplot as plt

fig, ax = plt.subplots()
#ax.plot(xs.detach(), exp_ad.detach(), label="f(x)")
ax.scatter(xs.detach(), dfdx_gpsr_dig.detach(), s=20, label="df/dx GPSR Digital")
ax.scatter(xs.detach(), dfdx_gpsr_ham.detach(), s=10, label="df/dx GPSR HamEvo")
ax.plot(xs.detach(), dfdx_ad.detach(), label="df/dx AD")
plt.legend()
plt.show()
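As a sanity reference (my addition, not part of the original script): each RX(i, scaling * x) acting on |0⟩ gives ⟨Z_i⟩ = cos(scaling * x), so f(x) = n_qubits * cos(scaling * x) and df/dx = -n_qubits * scaling * sin(scaling * x), which the AD curve reproduces:

```python
# Closed-form reference for the circuit above
dfdx_exact = -n_qubits * scaling * torch.sin(scaling * xs)
assert torch.allclose(dfdx_ad, dfdx_exact, atol=1e-4)
```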

One example of a wrong result: [plot comparing the AD, GPSR Digital, and GPSR HamEvo derivative curves]

This seems related to the convention used for the generator: the GPSR paper includes a 1/2 factor in the exponent, while HamEvo does not.
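For context, here is a sketch of the standard single-gap shift rule (my summary, not qadence's implementation). RX(θ) applies exp(-i θ X / 2), while HamEvo(G, t) applies exp(-i G t), so the same rotation corresponds to different generator spectra. For U(x) = exp(-i x G) with G having a single spectral gap Δ, any expectation value is a trigonometric function of Δx, which fixes the rule:

$$
f(x) = a + b\cos(\Delta x) + c\sin(\Delta x)
\quad\Rightarrow\quad
f'(x) = \frac{\Delta\,\big[f(x+s) - f(x-s)\big]}{2\sin(\Delta s)}.
$$

The generator X/2 has gap Δ = 1 while the bare X has Δ = 2, and multiplying the generator by `scaling` scales Δ again; any prefactor or default shift s that assumes the RX gap therefore gives the wrong magnitude and can land on invalid shifts where sin(Δs) = 0.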

Observations:

  • Multiplying the HamEvo eigenvalues by 2, and setting the default shift in the single_gap_psr function to π/4, partially solves the problem, but then the GPSR derivative in the example above always diverges for scaling = 4 (see the sketch below).
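A possible reading of that divergence, assuming the shift rule sketched above: doubling the eigenvalues doubles the gap, so for scaling = 4 on one qubit the generator 0.5 * scaling * X = 2X (eigenvalues ±2, gap 4) gets an effective gap of 8, and with the shift fixed at π/4 the denominator sin(8 · π/4) = sin(2π) vanishes. A minimal numeric sketch, where single_gap_rule is my own stand-in and not qadence's single_gap_psr:

```python
import torch

def single_gap_rule(f, x, gap, shift):
    # Standard single-gap shift rule: f'(x) = gap * (f(x+s) - f(x-s)) / (2*sin(gap*s))
    denom = 2 * torch.sin(torch.as_tensor(gap * shift))
    return gap * (f(x + shift) - f(x - shift)) / denom

# One-qubit HamEvo(2*X, x) gives <Z> = cos(4x): true gap 4, exact df/dx = -4*sin(4x)
f = lambda x: torch.cos(4 * x)
x = torch.tensor(0.3)

print(single_gap_rule(f, x, gap=4.0, shift=torch.pi / 8))  # valid shift: matches -4*sin(1.2)
print(-4 * torch.sin(torch.tensor(1.2)))                   # exact reference value
# With the gap doubled to 8 and shift = pi/4, sin(gap*shift) = sin(2*pi) ≈ 0,
# so the rule divides ~0 by ~0 and blows up instead of returning a derivative:
print(single_gap_rule(f, x, gap=8.0, shift=torch.pi / 4))
```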

jpmoutinho · May 10, 2024