
Faster `CouplingMap` initialization when `couplinglist` is a numpy array.

Open biswash47 opened this issue 1 year ago • 4 comments

What should we add?

The CouplingMap class in /qiskit/transpiler/coupling.py accepts a coupling list for initialization. The coupling list is expected to be a sequence of pairs of coupled qubits. In the __init__() method, the given sequence is iterated over with a list comprehension to create a list of tuples: https://github.com/Qiskit/qiskit/blob/8acc76606f368db619ccee9b65aef4a995ef89f6/qiskit/transpiler/coupling.py#L72

This list comprehension seems to be quite slow for numpy arrays. Below is a comparison of the runtime of the default list comprehension against an alternative one inside a numba JIT-compiled function:

Setup

import numpy as np
from qiskit.transpiler import CouplingMap
import CouplingMap_Alt

# chain of qubits
couplings = np.array(list(zip(range(20), range(1, 20))))

Default

%%timeit
cmap = CouplingMap(couplings)

# 18.2 µs ± 4 µs per loop (mean ± std. dev. of 7 runs, 100,000 loops each)

Alternative

%%timeit
cmap = CouplingMap_Alt(couplings)

# 3.06 µs ± 110 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each)

The CouplingMap_Alt class uses the following function to create the list of tuples:

import numba as nb
@nb.njit
def to_tuple(array):
    return [(e[0], e[1]) for e in array]

and calls this function in the __init__() method as

if couplinglist is not None:
    self.graph.extend_from_edge_list(to_tuple(couplinglist))
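For illustration only, the conversion that `to_tuple` performs can be reproduced without the numba dependency; this sketch (function name `to_tuple_py` is my own, not from the issue) shows the same NumPy-array-to-tuple-list step in plain Python:

```python
import numpy as np

def to_tuple_py(array):
    # Plain-Python equivalent of the @nb.njit to_tuple above, shown
    # without numba for illustration; each row becomes a 2-tuple.
    return [(e[0], e[1]) for e in array]

# Same 20-qubit chain as in the issue's setup
couplings = np.array(list(zip(range(20), range(1, 20))))

edges = to_tuple_py(couplings)
assert len(edges) == 19
assert edges[0] == (0, 1) and edges[-1] == (18, 19)
```

Numba's speedup here comes from compiling this per-row loop to native code instead of executing it element by element in the interpreter.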

biswash47 avatar Feb 29 '24 11:02 biswash47

Is the construction time of CouplingMap causing you any noticeable bottlenecks in a complete script?

The reason I'm asking is mostly that I'd expect any operation that uses a coupling map to be much, much slower than the construction time. Numba is a very complex dependency for a small speedup, and even then there'd still be the inefficiency of constructing a new list of tuples at all. If this is a performance bottleneck for you, we'd probably want to look at expanding the Rustworkx method PyDiGraph::extend_from_edge_list to accept an arbitrary Python object/sequence that it iterates through directly, rather than requiring temporary structures.

If it helps, your specific example here is equivalent to CouplingMap.from_line(20), which completes in about 1.5 µs for me - faster than the Numba version.
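To make the "equivalent" claim concrete: the connectivity that CouplingMap.from_line(20) represents is the same chain as the edge array in the setup above. A minimal stdlib-only sketch (helper name `line_edges` is my own, for illustration):

```python
def line_edges(n):
    # Edges of a linear chain of n qubits: (0,1), (1,2), ..., (n-2, n-1).
    # This is the coupling structure CouplingMap.from_line(n) encodes.
    return [(i, i + 1) for i in range(n - 1)]

edges = line_edges(20)
assert len(edges) == 19
assert edges[0] == (0, 1) and edges[-1] == (18, 19)
```

from_line skips the generic sequence conversion entirely, which is presumably why it beats both the default path and the Numba version.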

jakelishman avatar Feb 29 '24 11:02 jakelishman

You may also find that calling ndarray.tolist() before passing your Numpy array to CouplingMap brings performance to acceptable levels for you as well - that lets Numpy do the conversion to a list of lists within its own C code, avoiding the per-element iteration through a Numpy array (which has some unavoidable overhead when driven from Python-space calls). For example, changing your call to

CouplingMap(couplings.tolist())

reduces the timings from 16µs to 4.5µs on my machine - not quite as fast as Numba (there's one more intermediate structure), but possibly fast enough for you?
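A small sketch of what tolist() produces, and why it helps: the nested list it returns contains plain Python ints, built in a single C-level pass rather than one Python-level indexing call per element:

```python
import numpy as np

# Same 20-qubit chain as in the issue's setup
couplings = np.array(list(zip(range(20), range(1, 20))))

# One C-level conversion: nested list of plain Python ints
edges = couplings.tolist()

assert edges[0] == [0, 1] and len(edges) == 19
assert type(edges[0][0]) is int  # not np.int64 - already unboxed
```

Passing this nested list to CouplingMap then lets the existing list comprehension iterate over Python objects directly, without touching the NumPy per-element access machinery.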

jakelishman avatar Feb 29 '24 11:02 jakelishman

The numba function was just an example. I did not mean to imply that numba be added as a dependency.

In light of movable neutral-atom qubits (à la https://arxiv.org/pdf/2306.03487.pdf), the overhead can become significant, especially when physical movements of qubits are considered for mapping quantum circuits onto neutral-atom hardware, for example.

@jakelishman maybe the feature should be considered for rustworkx then, as you mentioned, so that graph edges can be constructed directly from a numpy array.

biswash47 avatar Feb 29 '24 12:02 biswash47

The more edges there are in the coupling map, the more expensive any operations on the coupling map are going to be as well - in these cases, is the construction time of the CouplingMap still noticeable compared to the runtime of the algorithms using it?

You can file this (well, more specifically about PyDiGraph::extend_from_edge_list) as a request over at Qiskit/rustworkx if you like. I'll leave this issue open because if Rustworkx do accept a patch, we'd need to modify our code on this side to use it as well.

jakelishman avatar Feb 29 '24 12:02 jakelishman