
Allow various `ParameterExpression`s for qiskit to tc conversion

Open king-p3nguin opened this issue 10 months ago • 4 comments

  • The current implementation for converting a Qiskit circuit to a TensorCircuit circuit does not support circuits whose gates take multiple parameters or whose parameter expressions use NumPy ufuncs. `sympy.lambdify` can fix this.
  • `qiskit.circuit.bit.Bit.index` is deprecated.
  • `test_qiskit2tc_parameterized` does not seem to be working due to the deprecation of `QubitConverter`, so I removed `QubitConverter` from the test.
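The lambdify-based fix can be sketched with plain sympy. The symbols below are hypothetical stand-ins for Qiskit `Parameter`s; in the actual conversion the sympy expression is recovered from the `ParameterExpression` itself:

```python
import math

import sympy

# Hypothetical stand-ins for Qiskit Parameters; the real conversion
# extracts an equivalent sympy expression from the ParameterExpression.
theta, phi = sympy.symbols("theta phi")

# A multi-parameter expression with a transcendental function (a ufunc on
# the numpy side), which simple per-parameter binding cannot handle.
expr = sympy.sin(theta) * sympy.cos(phi)

# lambdify compiles the symbolic expression into an ordinary callable,
# so all parameters can be bound in a single call.
f = sympy.lambdify([theta, phi], expr, modules="numpy")

value = f(0.5, 0.25)
```

This is only a sketch of the technique, not the PR's exact code path.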

I noticed that qiskit-nature is commented out in requirements-extra.txt. Is it okay that I uncommented it?

king-p3nguin avatar Apr 06 '24 20:04 king-p3nguin

Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Project coverage is 75.68%. Comparing base (b042a74) to head (0565d5e).

:exclamation: Current head 0565d5e differs from pull request most recent head 77721d4. Consider uploading reports for the commit 77721d4 to get more accurate results

Additional details and impacted files
@@            Coverage Diff             @@
##           master     #207      +/-   ##
==========================================
+ Coverage   75.52%   75.68%   +0.15%     
==========================================
  Files          67       67              
  Lines       10804    10801       -3     
==========================================
+ Hits         8160     8175      +15     
+ Misses       2644     2626      -18     

:umbrella: View full report in Codecov by Sentry.

codecov[bot] avatar Apr 06 '24 21:04 codecov[bot]

> I noticed that qiskit-nature is commented out in requirements-extra.txt. Is it okay that I uncommented it?

As long as the CI works well :) I commented the package out due to some CI incompatibility, which might have been resolved by now.

refraction-ray avatar Apr 07 '24 02:04 refraction-ray

If I just use algebraic operations like `+-*/` with modules="numpy", there is no problem; but with NumPy ufuncs, the pytorch, jax, and tensorflow backends all seem to fail, unfortunately. I changed the code so that using the pytorch backend with algebraic operations only still works.
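A minimal illustration of why purely algebraic expressions survive any backend (a sketch assuming sympy and torch are installed): the function lambdify generates contains only Python operators, which dispatch to the tensor's own methods.

```python
import sympy
import torch

a, b = sympy.symbols("a b")

# Only +, -, *, / appear, so the generated lambda contains nothing but
# Python operators and works on tensors from any backend.
f = sympy.lambdify([a, b], a * b + a - b / 2, modules="numpy")

out = f(torch.tensor(3.0), torch.tensor(4.0))
# The arithmetic dispatches to torch's own operators, so the result
# remains a torch.Tensor and stays differentiable.
```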

king-p3nguin avatar Apr 07 '24 07:04 king-p3nguin

> If I just use algebraic operations like `+-*/` with modules="numpy", there is no problem; but with NumPy ufuncs, the pytorch, jax, and tensorflow backends all seem to fail, unfortunately. I changed the code so that using the pytorch backend with algebraic operations only still works.

Yes, I just realized another scenario that prefers modules=backend.name, i.e. when the translation function is used inside a jitted function. However, I think we can directly fall back the torch backend to numpy even for non-algebraic operations.

At least for the test case in the newly added tests, I tried the following (the numpy module works for torch tensors even with non-algebraic ops):

[Screenshot, Apr 07 2024: modules="numpy" evaluating the lambdified expression on torch tensors, succeeding even with non-algebraic ops]
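A sketch of that experiment (assuming sympy, numpy, and torch): NumPy ufuncs applied to a plain torch tensor round-trip through `Tensor.__array__`, so `modules="numpy"` still evaluates non-algebraic expressions.

```python
import math

import sympy
import torch

x, y = sympy.symbols("x y")
expr = sympy.exp(sympy.sin(x)) + sympy.Abs(y)

# modules="numpy" makes lambdify emit numpy's exp/sin/abs.
f = sympy.lambdify([x, y], expr, modules="numpy")

# On a plain (grad-free) torch tensor the numpy ufuncs succeed, because
# numpy silently converts the tensor via Tensor.__array__.
out = f(torch.tensor(0.5), torch.tensor(-2.0))
expected = math.exp(math.sin(0.5)) + 2.0
```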

refraction-ray avatar Apr 07 '24 07:04 refraction-ray

It seems like using modules="numpy" with the pytorch backend has a problem when used with grad, but modules="math" seems to work.

{
	"name": "RuntimeError",
	"message": "Can't call numpy() on Tensor that requires grad. Use tensor.detach().numpy() instead.",
	"stack": "---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
Cell In[1], line 52
     50 params = tc.backend.convert_to_tensor(params)
     51 print(\"new :\", params)
---> 52 grad = tc.backend.grad(cost_fn)(
     53     params,
     54 )
     55 print(np.isnan(grad))
     56 assert tc.backend.sum(np.isnan(grad)) == 0

File ~/tensorcircuit/tensorcircuit/backends/pytorch_backend.py:504, in PyTorchBackend.grad.<locals>.wrapper(*args, **kws)
    503 def wrapper(*args: Any, **kws: Any) -> Any:
--> 504     y, gr = self.value_and_grad(f, argnums, has_aux)(*args, **kws)
    505     if has_aux:
    506         return gr, y[1:]

File ~/tensorcircuit/tensorcircuit/backends/pytorch_backend.py:541, in PyTorchBackend.value_and_grad.<locals>.wrapper(*args, **kws)
    539 def wrapper(*args: Any, **kws: Any) -> Any:
    540     gavf = torchlib.func.grad_and_value(f, argnums=argnums, has_aux=has_aux)
--> 541     g, v = gavf(*args, **kws)
    542     return v, g

File ~/.local/share/virtualenvs/tensorcircuit-BMvyhJJt/lib/python3.11/site-packages/torch/_functorch/vmap.py:44, in doesnt_support_saved_tensors_hooks.<locals>.fn(*args, **kwargs)
     41 @functools.wraps(f)
     42 def fn(*args, **kwargs):
     43     with torch.autograd.graph.disable_saved_tensors_hooks(message):
---> 44         return f(*args, **kwargs)

File ~/.local/share/virtualenvs/tensorcircuit-BMvyhJJt/lib/python3.11/site-packages/torch/_functorch/eager_transforms.py:1256, in grad_and_value.<locals>.wrapper(*args, **kwargs)
   1253 diff_args = _slice_argnums(args, argnums, as_tuple=False)
   1254 tree_map_(partial(_create_differentiable, level=level), diff_args)
-> 1256 output = func(*args, **kwargs)
   1257 if has_aux:
   1258     if not (isinstance(output, tuple) and len(output) == 2):

Cell In[1], line 44, in cost_fn(params)
     41 def cost_fn(params):
     42     return tc.backend.real(
     43         tc.backend.sum(
---> 44             get_unitary(params),
     45         ),
     46     )

Cell In[1], line 29, in get_unitary(params)
     27 @tc.backend.jit
     28 def get_unitary(params):
---> 29     return tc.Circuit.from_qiskit(
     30         ansatz, inputs=np.eye(2**n), binding_params=params
     31     ).state()

File ~/tensorcircuit/tensorcircuit/abstractcircuit.py:889, in AbstractCircuit.from_qiskit(cls, qc, n, inputs, circuit_params, binding_params)
    886 if n is None:
    887     n = qc.num_qubits
--> 889 return qiskit2tc(  # type: ignore
    890     qc,
    891     n,
    892     inputs,
    893     circuit_constructor=cls,
    894     circuit_params=circuit_params,
    895     binding_params=binding_params,
    896 )

File ~/tensorcircuit/tensorcircuit/translation.py:474, in qiskit2tc(qc, n, inputs, is_dm, circuit_constructor, circuit_params, binding_params)
    472 idx = [qc.find_bit(qb).index for qb in gate_info.qubits]
    473 gate_name = gate_info[0].name
--> 474 parameters = _translate_qiskit_params(gate_info, binding_params)
    475 if gate_name in [
    476     \"h\",
    477     \"x\",
   (...)
    490     \"cz\",
    491 ]:
    492     getattr(tc_circuit, gate_name)(*idx)

File ~/tensorcircuit/tensorcircuit/translation.py:401, in _translate_qiskit_params(gate_info, binding_params)
    395     sympy_symbols = [
    396         sympy.Symbol(str(symbol).replace(\"[\", \"_\").replace(\"]\", \"\"))
    397         for symbol in sympy_symbols
    398     ]
    399     lam_f = sympy.lambdify(sympy_symbols, expr, modules=lambdify_module_name)
    400     parameters.append(
--> 401         lam_f(*[binding_params[param.index] for param in parameter_list])
    402     )
    403 else:
    404     # numbers, arrays, etc.
    405     parameters.append(p)

File <lambdifygenerated-2>:2, in _lambdifygenerated(φ_0, φ_1, φ_2)
      1 def _lambdifygenerated(φ_0, φ_1, φ_2):
----> 2     return exp(sin(φ_0)) + abs(φ_1)/arctan(φ_2)

File ~/.local/share/virtualenvs/tensorcircuit-BMvyhJJt/lib/python3.11/site-packages/torch/_tensor.py:1062, in Tensor.__array__(self, dtype)
   1060     return handle_torch_function(Tensor.__array__, (self,), self, dtype=dtype)
   1061 if dtype is None:
-> 1062     return self.numpy()
   1063 else:
   1064     return self.numpy().astype(dtype, copy=False)

RuntimeError: Can't call numpy() on Tensor that requires grad. Use tensor.detach().numpy() instead."
}
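The root cause is visible in the last frames of the traceback: a NumPy ufunc applied to a grad-tracking tensor triggers `Tensor.__array__`, which calls `.numpy()` and refuses. A minimal repro, independent of the conversion code (assuming numpy and torch):

```python
import numpy as np
import torch

t = torch.tensor(0.5, requires_grad=True)

# numpy cannot differentiate through torch, so it tries to convert the
# tensor via Tensor.__array__ -> Tensor.numpy(), which rejects tensors
# that require grad.
try:
    np.sin(t)
    failed = False
    message = ""
except RuntimeError as err:
    failed = True
    message = str(err)
```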

king-p3nguin avatar Apr 07 '24 12:04 king-p3nguin

> It seems like using modules="numpy" with the pytorch backend has a problem when used with grad, but modules="math" seems to work.

Cool, thanks for your careful investigation! The PR now LGTM

refraction-ray avatar Apr 08 '24 01:04 refraction-ray

@all-contributors please add @king-p3nguin for test, doc

refraction-ray avatar Apr 08 '24 01:04 refraction-ray

@refraction-ray

I've put up a pull request to add @king-p3nguin! :tada:

allcontributors[bot] avatar Apr 08 '24 01:04 allcontributors[bot]