Tensor Hypercontraction with Qubitized QPE Produces Different Results from Reference Paper
I ran the example circuit provided at https://qualtran.readthedocs.io/en/latest/bloqs/phase_estimation/qubitization_qpe.html. However, the results I obtained do not match the reference paper cited in the documentation. Specifically:
- Toffoli counts are significantly higher than those in the reference: I got about 1300 trillion, versus roughly 33 billion in the paper.
- Qubit counts are much lower than expected: I got 260 qubits, versus around 2000 qubits in the paper.
I followed the standard setup as described, but the discrepancy persists. Could this be due to an implementation difference, a missing step in the documentation, or an issue with the default parameters?
Given the lower qubit count and higher Toffoli count, my guess is that the correct QROAM is not being used (I believe https://github.com/quantumlib/Qualtran/pull/1378 is not in v0.5 of Qualtran), so you may need to use the main branch of Qualtran to get the correct cost. I'm curious to see if the qubit counts come out accurately, as that's a relatively new feature and isn't something I checked!
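Not part of the original comment, but a minimal way to confirm which Qualtran build is actually in use (assuming the package is installed under the name qualtran):
# Print the installed Qualtran version; a plain release number (e.g. "0.5.0")
# suggests the PyPI wheel is in use rather than a checkout of main.
from importlib.metadata import version

print(version("qualtran"))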
I tried using the updated PrepareTHC method along with the main branch of Qualtran, but I'm still getting the same results: lower qubit count and higher Toffoli count.
For reference, here’s the code I used:
# NOTE: the imports below are my best guess at the relevant module paths and
# may differ between Qualtran/OpenFermion versions.
import numpy as np
from openfermion.resource_estimates.utils import QI  # QROAM blocking helper
from qualtran.bloqs.chemistry.thc.prepare_test import build_random_test_integrals
from qualtran.bloqs.phase_estimation import LPResourceState, QubitizationQPE

num_spinorb = 152
num_bits_state_prep = 10
num_bits_rot = 20
num_mu = 450
num_spat = num_spinorb // 2

# QROAM blocking factor used for kr1/kr2.
qroam_blocking_factor = np.power(2, QI(num_mu + num_spat)[0])

# Random THC coefficients (not from an actual THC factorization).
t_l, eta, zeta = build_random_test_integrals(num_mu, num_spinorb // 2, seed=7)

# get_walk_operator_for_thc_ham is the helper whose internals are shown below.
walk = get_walk_operator_for_thc_ham(
    t_l,
    eta,
    zeta,
    num_bits_state_prep=num_bits_state_prep,
    num_bits_theta=num_bits_rot,
    kr1=qroam_blocking_factor,
    kr2=qroam_blocking_factor,
)

# Target accuracy for the eigenvalue, converted to a QPE precision.
algo_eps = 0.0016
qpe_eps = algo_eps / (walk.block_encoding.alpha * 2**0.5)

qubitization_qpe_chem_thc = QubitizationQPE(
    walk, LPResourceState.from_standard_deviation_eps(qpe_eps)
)
print(qubitization_qpe_chem_thc.t_complexity())
print(qubitization_qpe_chem_thc.signature.n_qubits())
Below is the setup used inside the get_walk_operator_for_thc_ham helper:
prep = PrepareTHC.from_hamiltonian_coeffs(t_l, eta, zeta, num_bits_state_prep, log_block_size=2)
num_mu = zeta.shape[-1]
num_spin_orb = 2 * len(t_l)
sel = SelectTHC(num_mu, num_spin_orb, num_bits_theta, prep.keep_bitsize, kr1=kr1, kr2=kr2)
block_encoding = SelectBlockEncoding(select=sel, prepare=prep)
walk_op = QubitizationWalkOperator(block_encoding=block_encoding)
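As a rough sketch (not part of the original helper), one can check how sensitive the prepare cost is to the QROAM block-size choice by building PrepareTHC with two different log_block_size values; the import paths below are assumptions and may differ between Qualtran versions.
# Sketch: compare PrepareTHC costs for two QROAM block sizes, using the same
# random test integrals as above. Import paths are assumed, not verified.
from qualtran.bloqs.chemistry.thc import PrepareTHC
from qualtran.bloqs.chemistry.thc.prepare_test import build_random_test_integrals

t_l, eta, zeta = build_random_test_integrals(450, 152 // 2, seed=7)
for log_block_size in (2, 8):
    prep = PrepareTHC.from_hamiltonian_coeffs(t_l, eta, zeta, 10, log_block_size=log_block_size)
    print(log_block_size, prep.t_complexity())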
Can't say anything about the qubits, but maybe the Toffoli counts are orders of magnitude higher than in the paper because you are using random t_l, eta, zeta instead of coefficients actually computed via tensor hypercontraction. For example, in Table IV of the THC paper, the one-norms for the different THC factorizations of the Reiher et al. FeMoCo Hamiltonian are around 300, but if you take random t_l, eta, zeta as here and compute the one-norm from them, you get a one-norm of around 4 million.
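A quick way to confirm this, reusing the walk object from the snippet above (the alpha attribute access mirrors the qpe_eps line there), is to print the one-norm the block encoding was built with:
# Inspect the one-norm (lambda) of the block encoding. Random integrals give a
# much larger lambda than a real THC factorization, and the number of QPE walk
# steps scales as O(lambda / epsilon).
lam = walk.block_encoding.alpha
print("lambda =", lam)
print("lambda / algo_eps =", lam / algo_eps)  # rough order of magnitude of QPE steps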
I'll look into this
@vinayswamik how big a difference are you seeing? The lambda value will be grossly wrong if taken directly from the randomly generated coefficients, but that only matters for phase estimation. The block encoding cost is off by about 300 Toffolis (see https://qualtran.readthedocs.io/en/latest/bloqs/chemistry/resource_estimation.html#block-encoding-bloqs), and the walk operator is pretty much that cost plus another reflection (https://github.com/quantumlib/Qualtran/blob/c7403857774474ec3e50d0c3ee4953fbcce077bf/qualtran/bloqs/phase_estimation/phase_estimation_of_quantum_walk.ipynb). Is this the magnitude of the discrepancy you're talking about?
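To separate the two effects (per-step block-encoding cost versus the number of QPE repetitions driven by lambda), one simple check, reusing the objects from the snippet above, is:
# Compare the cost of a single walk step with the full QPE cost. If the
# per-step cost is close to the paper's per-iteration numbers but the total is
# orders of magnitude larger, the inflated lambda (iteration count) is to blame.
print("single walk step:", walk.t_complexity())
print("full QPE:        ", qubitization_qpe_chem_thc.t_complexity())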
The source of the disagreement is outlined here: https://github.com/quantumlib/Qualtran/issues/390. Now might be a good time for me to revisit this, given that the library has evolved so much since then.