
[BUG] `Sum.terms()` doesn't re-package the coefficients

Open KetpuntoG opened this issue 10 months ago • 11 comments

Expected behavior

(after discussion with @Qottmann )

The problem is that Sum.terms() doesn't re-package the coefficients into a single array; it just returns them in a list. As a result, Sum and LinearCombination behave differently, which causes me problems with differentiability. The expected behavior would be:

coeffs = pnp.array([0.5], requires_grad=True)

op = qml.dot(coeffs, [X(0)])
op.terms()[0]

tensor([0.5], requires_grad=True)

Actual behavior

coeffs = pnp.array([0.5], requires_grad=True)

op = qml.dot(coeffs, [X(0)])
op.terms()[0]

[tensor(0.5, requires_grad=True)]
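The difference can be illustrated without any PennyLane machinery: a Python list of 0-d arrays and a single 1-d array are distinct containers, and only the latter travels through array-based autodiff as one trainable object. A minimal NumPy sketch (requires_grad itself belongs to pennylane.numpy, not plain NumPy, so it is omitted here):

```python
import numpy as np

# What Sum.terms() currently returns: a Python list of 0-d arrays.
unpacked = [np.array(0.5)]

# What LinearCombination.terms() returns: one 1-d array.
packed = np.array([0.5])

print(type(unpacked).__name__, unpacked[0].shape)  # list ()
print(type(packed).__name__, packed.shape)         # ndarray (1,)
```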

Additional information

No response

Source code

No response

Tracebacks

No response

System information

Name: PennyLane
Version: 0.36.0.dev0
Summary: PennyLane is a cross-platform Python library for quantum computing, quantum machine learning, and quantum chemistry. Train a quantum computer the same way as a neural network.
Home-page: https://github.com/PennyLaneAI/pennylane
Author: 
Author-email: 
License: Apache License 2.0
Location: /usr/local/lib/python3.10/dist-packages
Requires: appdirs, autograd, autoray, cachetools, networkx, numpy, pennylane-lightning, requests, rustworkx, scipy, semantic-version, toml, typing-extensions
Required-by: PennyLane_Lightning

Platform info:           Linux-6.1.58+-x86_64-with-glibc2.35
Python version:          3.10.12
Numpy version:           1.25.2
Scipy version:           1.11.4
Installed devices:
- lightning.qubit (PennyLane_Lightning-0.35.1)
- default.clifford (PennyLane-0.36.0.dev0)
- default.gaussian (PennyLane-0.36.0.dev0)
- default.mixed (PennyLane-0.36.0.dev0)
- default.qubit (PennyLane-0.36.0.dev0)
- default.qubit.autograd (PennyLane-0.36.0.dev0)
- default.qubit.jax (PennyLane-0.36.0.dev0)
- default.qubit.legacy (PennyLane-0.36.0.dev0)
- default.qubit.tf (PennyLane-0.36.0.dev0)
- default.qubit.torch (PennyLane-0.36.0.dev0)
- default.qutrit (PennyLane-0.36.0.dev0)
- null.qubit (PennyLane-0.36.0.dev0)

Existing GitHub issues

  • [X] I have searched existing GitHub issues to make sure the issue does not already exist.

KetpuntoG avatar Apr 12 '24 15:04 KetpuntoG

Would have to do an appropriate re-packaging in https://github.com/PennyLaneAI/pennylane/blob/master/pennylane/ops/op_math/sum.py#L445 and https://github.com/PennyLaneAI/pennylane/blob/master/pennylane/ops/op_math/sum.py#L459 (same for Prod).

I actually don't understand how to do that with qml.math, as I keep "losing" the requires_grad attribute 🤔

>>> coeffs = [pnp.array(0.5, requires_grad=True)]
>>> coeffs_new = qml.math.array(coeffs)
>>> coeffs, coeffs_new
([tensor(0.5, requires_grad=True)], array([0.5]))

(The same happens with asarray.)

Qottmann avatar Apr 12 '24 15:04 Qottmann
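The attribute loss described above can be reproduced with any ndarray subclass: np.array and np.asarray return the base class by default (subok=False), so subclass attributes are stripped. A sketch using a hypothetical Tagged subclass as a stand-in for pennylane.numpy's tensor:

```python
import numpy as np

class Tagged(np.ndarray):
    # Hypothetical stand-in for pennylane.numpy's tensor: an ndarray
    # subclass carrying a requires_grad attribute.
    def __new__(cls, value, requires_grad=True):
        obj = np.asarray(value).view(cls)
        obj.requires_grad = requires_grad
        return obj

t = Tagged(0.5)
repacked = np.array([t])  # default subok=False returns the base class

print(type(repacked) is np.ndarray)        # True
print(hasattr(repacked, "requires_grad"))  # False
```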

Also, what happens if one coefficient requires grad and the other doesn't?

albi3ro avatar Apr 12 '24 20:04 albi3ro

Also, what happens if one coefficient requires grad and the other doesn't?

hmm, I am not sure which one should be the correct behavior, but I would say the important thing is that it is the same in both cases

KetpuntoG avatar Apr 15 '24 13:04 KetpuntoG

Actually, this produces an SProd, not a Sum:

coeffs = pnp.array([0.5], requires_grad=True)
op = qml.dot(coeffs, [X(0)])

astralcai avatar Apr 22 '24 17:04 astralcai

@KetpuntoG Would you be able to provide more context regarding when this causes an issue?

astralcai avatar Apr 22 '24 18:04 astralcai

I'm creating the template qml.Qubitization that takes as input a Hamiltonian. In this line I wrote:

coeffs, ops = hamiltonian.terms()

If the Hamiltonian is a LinearCombination, this will be differentiable because coeffs = np.array([......], requires_grad=True). However, if I use qml.dot to define the Hamiltonian, this will not be differentiable because coeffs is a list of individual tensors that I cannot put back together in a differentiable way. (We tried hstack but it doesn't work.) Maybe it is related to this?

KetpuntoG avatar Apr 22 '24 19:04 KetpuntoG

I'm creating the template qml.Qubitization that takes as input a Hamiltonian. In this line I wrote:

coeffs, ops = hamiltonian.terms()

If the Hamiltonian is a LinearCombination, this will be differentiable because coeffs = np.array([......], requires_grad=True). However, if I use qml.dot to define the Hamiltonian, this will not be differentiable because coeffs is a list of individual tensors that I cannot put back together in a differentiable way. (We tried hstack but it doesn't work.) Maybe it is related to this?

Thank you for the context! I'll look into this.

astralcai avatar Apr 22 '24 19:04 astralcai

I'm creating the template qml.Qubitization that takes as input a Hamiltonian. In this line I wrote:

coeffs, ops = hamiltonian.terms()

If the Hamiltonian is a LinearCombination, this will be differentiable because coeffs = np.array([......], requires_grad=True). However, if I use qml.dot to define the Hamiltonian, this will not be differentiable because coeffs is a list of individual tensors that I cannot put back together in a differentiable way. (We tried hstack but it doesn't work.) Maybe it is related to this?

If the problem with hstack is that it loses requires_grad, would something like

qml.math.convert_like(qml.math.hstack(coeffs), coeffs[0])

fix the issue?

astralcai avatar Apr 22 '24 20:04 astralcai

qml.math.convert_like(qml.math.hstack(coeffs), coeffs[0])

Nice! This is what I was looking for! Thanks! @KetpuntoG you can use that in your code when you call terms() to re-package the coeffs, but we should also add it to Prod.terms(), Sum.terms(), and SProd.terms().

Qottmann avatar Apr 23 '24 07:04 Qottmann
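The idea behind the convert_like suggestion can be sketched in plain NumPy: stack the coefficients, then cast the result back to the type of the first coefficient. The Tagged subclass below is a hypothetical stand-in for pennylane.numpy's tensor, and the view-cast only models the effect of qml.math.convert_like, not its actual interface-dispatch implementation:

```python
import numpy as np

class Tagged(np.ndarray):
    # Hypothetical stand-in for pennylane.numpy's tensor.
    def __new__(cls, value):
        return np.asarray(value).view(cls)

coeffs = [Tagged(0.5), Tagged(1.5)]
stacked = np.hstack(coeffs)

# Force a base-class view to model the case where stacking has
# dropped the subclass...
plain = np.asarray(stacked)
# ...then cast back to the first coefficient's type, roughly the
# effect of qml.math.convert_like(stacked, coeffs[0]).
restored = plain.view(type(coeffs[0]))

print(type(restored).__name__)  # Tagged
```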

Nice! This is what I was looking for! Thanks! @KetpuntoG you can use that in your code when you call terms() to re-package the coeffs, but we should also add it to Prod.terms(), Sum.terms(), and SProd.terms().

In this case terms() will now always return an array instead of a list for the coefficients. Is that expected?

astralcai avatar Apr 23 '24 14:04 astralcai

I could not get it to work. We can try to fix it once the Sum has better grad support :)

KetpuntoG avatar Apr 25 '24 20:04 KetpuntoG