
Performance: use in-place matrix operations

Open albi3ro opened this issue 2 years ago • 1 comment

For manipulations involving large matrices, we want to cut down on memory allocation whenever possible. One way to do that is to use in-place operations instead of allocating a new array for each result.

>>> mat1 += mat2
>>> mat1 @= mat2
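
As a minimal NumPy sketch (an illustration only, not PennyLane internals), the in-place form reuses the existing buffer rather than allocating a new array:

import numpy as np

# Illustration: in-place addition writes into mat1's existing buffer.
mat1 = np.eye(4, dtype=np.complex128)
mat2 = np.full((4, 4), 0.5 + 0.0j)

buffer_before = mat1
mat1 += mat2
assert mat1 is buffer_before      # no new array was allocated

out_of_place = mat1 + mat2        # allocates a fresh array for the result
assert out_of_place is not mat1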

One downside to using in-place operations is that they cannot perform type promotion: the original mat1 must have a dtype that is at least as precise as mat2's. Therefore, we would either need to standardize Operator.matrix to return complex128, or perform the type casting manually.
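
As a hedged illustration of that casting constraint (plain NumPy, not PennyLane code):

import numpy as np

# In-place ops cannot promote the target dtype: adding complex values
# into a float64 array raises, so the target must already be complex128.
real_mat = np.eye(2)              # float64
complex_mat = 1j * np.eye(2)      # complex128

try:
    real_mat += complex_mat       # cannot cast complex128 into float64 in place
except TypeError as exc:
    print("in-place add failed:", exc)

real_mat = real_mat.astype(np.complex128)   # cast up front instead
real_mat += complex_mat                     # now succeeds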

For the example:

import pennylane as qml

H = qml.op_sum(
    qml.s_prod(0.5, qml.PauliX(0)),
    qml.s_prod(1.2, qml.prod(qml.PauliY(1), qml.PauliX(0))),
    qml.s_prod(3.4, qml.prod(qml.PauliZ(3), qml.PauliZ(2), qml.PauliY(0))),
    qml.s_prod(2.3, qml.prod(qml.PauliZ(4), qml.PauliY(5)))
)

Timing H.matrix(), I get 947 µs ± 106 µs per loop on this branch and 1.26 ms ± 497 µs per loop on master.

For comparison, qml.utils.sparse_hamiltonian(H_old).todense() takes 1.19 ms ± 4.86 µs per loop.
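
For reference, a rough way to reproduce this kind of measurement with the standard library (assuming the H constructed above) is:

import timeit

# Hedged sketch: time H.matrix() with timeit; exact numbers will differ
# from the figures quoted above depending on hardware and branch.
runs = timeit.repeat(H.matrix, number=100, repeat=5)
print(f"best of 5: {min(runs) / 100 * 1e3:.3f} ms per call")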

There may be other downsides to this that I am not yet aware of.

— albi3ro, Sep 12 '22 15:09

Hello. You may have forgotten to update the changelog! Please edit doc/releases/changelog-dev.md with:

  • A one-to-two sentence description of the change. You may include a small working example for new features.
  • A link back to this PR.
  • Your name (or GitHub username) in the contributors section.

— github-actions[bot], Sep 12 '22 15:09

Didn't seem to make much of a difference.

— albi3ro, Jan 10 '23 11:01