tqec
New input: `PyZX -> TQEC`
We want to take any output of PyZX, including its Pauli webs, and use any method to get it into tqec and simulate it. We are specifically not trying to find correlation surfaces automatically, rather we want to use those provided.
We would start with Clifford+T circuits that are BaseGraph objects in pyzx.
EDIT: rewrote goal to focus on using correlation surfaces found by PyZX, broader discussions about PyZX interoperability can also happen here.
~~I believe one would compile to a TQEC ZXGraph, but this is up for discussion.~~
Resources:
[1] Jan 7 meeting recording
[2] Jan 7 Slides
[3] pyzx documentation
[4] pyzx paper
[5] Aleks Kissinger's notebook
[6] Google group discussion with subject "March 26: a presentation on TQEC to Google"
[7] PyZX Pauli web module: https://github.com/zxcalc/pyzx/blob/master/pyzx/pauliweb.py
Replacing ZXGraph in tqec with the native PyZX graphs seems like a better option to me. They essentially perform the same function, and I believe PyZX supports directly constructing correlation surfaces (Pauli webs), at least as demonstrated by Aleks.
As a starting point, we could retain the explicit 3D coordinates in the PyZX graph by attaching time coordinates as data to the spider. Once we have a working compilation tool for converting from ZX representation to BlockGraph, we can remove the time coordinates and compile a flattened, optimized PyZX graph into the BlockGraph.
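To illustrate what "attaching time coordinates as data to the spider" could look like, here is a minimal sketch. It keeps the time (z) coordinate in a plain side table keyed by vertex id, next to the 2D (qubit, row) coordinates PyZX already provides. The `TimedLayout` name and the dict-based storage are my own assumptions for illustration, not PyZX API; within PyZX itself one would attach the data to the graph directly.

```python
# Sketch: track a time (z) coordinate per spider alongside PyZX's 2D layout.
# `TimedLayout` and the dict-based storage are hypothetical, for illustration
# only; PyZX stores per-vertex data on the graph itself.

class TimedLayout:
    def __init__(self):
        self.xy = {}  # vertex id -> (qubit, row), as given by PyZX
        self.z = {}   # vertex id -> time coordinate, attached by us

    def place(self, v, qubit, row, z):
        self.xy[v] = (qubit, row)
        self.z[v] = z

    def position3d(self, v):
        """Return the explicit 3D coordinate (x, y, z) for a vertex."""
        qubit, row = self.xy[v]
        return (qubit, row, self.z[v])

layout = TimedLayout()
layout.place(0, qubit=0, row=0, z=0)
layout.place(1, qubit=0, row=1, z=1)
```

Once a ZX-to-BlockGraph compiler exists, the `z` table is exactly the part that would be dropped and recomputed by the compiler.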
Hi Kabir, you can follow this to create a new milestone, though there might be permission issues.
Right, as discussed over email, I have some files to convert PyZX to JSON. I then use the JSON output for 3D shenanigans elsewhere. So, not fully PyZX -> TQEC, but related.
I have one branch committed locally and ready to publish. Just three files, into the src/tqec/interop (I imagine that means interoperability) folder:
- The main file with functions that do things
- An example.py with a usage example
- A JSON schema. I reckon this is the most important file as it makes interoperability data needs concrete. And if there is a way to get better information out of PyZX, by all means!
Except... I couldn't publish the branch because I don't have the rights to publish a branch into this repository.
I have one branch committed locally and ready to publish. Just three files, into the src/tqec/interop (i imagine that means interoperability) folder
I think the easiest way to proceed is to open a draft PR directly in the tqec repository. This will allow us to quickly experiment and provide feedback without focusing too much on code quality. If we decide to go ahead with this approach, we can then start working on improving the code.
Except... I couldn't publish the branch because I don't have the rights to publish a branch into this repository.
I’ve sent you an invitation to join the organization. Let me know if you still have any issues.
I have one branch committed locally and ready to publish. Just three files, into the src/tqec/interop (I imagine that means interoperability) folder:
- The main file with functions that do things
- An example.py with a usage example
- A JSON schema. I reckon this is the most important file as it makes interoperability data needs concrete. And if there is a way to get better information out of PyZX, by all means!
Except... I couldn't publish the branch because I don't have the rights to publish a branch into this repository.
If you received the invitation from @inmzhang and accept it, then you should be able to publish in the tqec repository. In principle, for new contributors, it is better to fork the repository and create a PR from your forked repository. You can do that by:
- Creating the fork on your personal GitHub account: https://github.com/tqec/tqec/fork.
- Getting the URL of the fork, probably https://github.com/jbolns/tqec
- Going into your local repository and adding a remote: git remote add [name of the remote] https://github.com/jbolns/tqec.git (or using SSH: git remote add [name of the remote] git@github.com:jbolns/tqec.git)
- Checking out your local branch: git checkout [name of your local branch]
- Publishing it on your fork: git push -u [name of the remote] [name of the newly created branch on your remote]
That's easier and makes sense, especially as this is a contribution to thinking about how to approach an emerging feature rather than a bug fix or improvement of an existing feature.
Invite. I got the invite and accepted. I saw a couple other things maybe I can contribute to. Plus, I like to be an optimist. Or, in more famous words, "I'm feeling lucky".
Repositories.
I'm not going to fork the repository. I do not like to have forked repositories on my profile. But I published both things as independent repositories on my profile.
You can find them here:
- Data extraction from PyZX: https://github.com/jbolns/pyzx_json
- Visualisation of the said data: https://github.com/jbolns/tqec_vis
... Do follow. I like smart people.
Replacing ZXGraph in tqec with the native PyZX graphs seems like a better option to me. They essentially perform the same function, and I believe PyZX supports directly constructing correlation surfaces (Pauli webs), at least as demonstrated by Aleks.
This makes sense, I agree.
As a starting point, we could retain the explicit 3D coordinates in the PyZX graph by attaching time coordinates as data to the spider. Once we have a working compilation tool for converting from ZX representation to BlockGraph, we can remove the time coordinates and compile a flattened, optimized PyZX graph into the BlockGraph.
A pyzx BaseGraph does not have 3D coordinates, could you clarify? Is the idea that we assign each vertex of a PyZX graph a time coordinate that indicates the order in which we build the blocks corresponding to each node?
For example, consider slide 15. We can enumerate each operation going left to right for each qubit going top to down, like*:
(1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12),
and then add BlockGraph nodes and edges in that order. Given a block assembly schedule and initial starting 3D position, one may pre-compute the 3D positions of the entire quantum circuit. In my understanding, the tool should be able to compile an arbitrary ZX Graph, even if it's not optimized. (*this scheduling is just an example--I'm not saying it's the best one.)
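A quick sketch of that enumeration, assuming each node carries a (qubit, row) pair: sort top-to-bottom by qubit, then left-to-right by row, and use the resulting index as the time coordinate. As noted above, this is just one possible schedule, not necessarily a good one; the function name and the dict-based graph representation are mine for illustration.

```python
# Sketch: derive a build order (and hence time coordinates) from 2D
# (qubit, row) coordinates. Nodes are visited left to right for each
# qubit, going top to bottom, matching the (1, ..., 12) enumeration above.
def schedule(coords):
    """coords: {vertex: (qubit, row)} -> {vertex: time index}."""
    order = sorted(coords, key=lambda v: (coords[v][0], coords[v][1]))
    return {v: t for t, v in enumerate(order)}

# 2 qubits x 2 rows: vertices a, b on qubit 0 and c, d on qubit 1.
coords = {"a": (0, 0), "b": (0, 1), "c": (1, 0), "d": (1, 1)}
times = schedule(coords)
# -> {"a": 0, "b": 1, "c": 2, "d": 3}
```

Given such a schedule and an initial 3D position, the 3D positions of the whole circuit can then be precomputed block by block.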
Another reason that it seems we should intentionally assign time coordinates is for non-Clifford operations (quoting one of Austin's emails):
Non-Clifford gates contain measurements that control the presence of future gates. These future gates also affect how you interpret future measurements which in turn control yet more future gates, imposing a time ordering ... To say the same thing using the language of a ZX graph, non-Clifford nodes are associated with a correlation surface/Pauli web whose parity controls the color of a future node, and future non-Clifford nodes can have correlation surfaces/Pauli webs that touch this node with indeterminate color, setting a time ordering. You can actually see an example of this on the first slide of this week's presentation. The later pi/4 (non-Clifford) node has a Pauli web that touches two earlier pi/4 nodes. These earlier nodes must be implemented before the indicated later node.
A pyzx BaseGraph does not have 3D coordinates, could you clarify? Is the idea that we assign each vertex of a PyZX graph a time coordinate that indicates the order in which we build the blocks corresponding to each node?
By Aleks:
- in PyZX, positions are 2D. Every spider has a coordinate (q, r), which are referred to as "qubit" and "row". If the ZX diagram came from a circuit, this is indeed the qubit and row where the spider occurs, but otherwise you can just think of these as "y" and "x"
- PyZX allows attaching arbitrary additional data to a node with string keys. So, this is one way you could store the third dimension. The backend architecture should be able to handle this pretty seamlessly
In my understanding, the tool should be able to compile an arbitrary ZX Graph, even if it's not optimized.
Finally, yes. As a starting point, attaching explicit 3D layout data to the PyZX graph would be the simplest approach.
Another reason that it seems we should intentionally assign time coordinates is for non-Clifford operations (quoting one of Austin's emails):
I believe this time information will be included in the Pauli Web computed by PyZX.
I'll start looking into how to replace ZXGraph data structure with PyZX graph in the next few days.
By Aleks:
- in PyZX, positions are 2D. Every spider has a coordinate (q, r), which are referred to as "qubit" and "row". If the ZX diagram came from a circuit, this is indeed the qubit and row where the spider occurs, but otherwise you can just think of these as "y" and "x"
- PyZX allows attaching arbitrary additional data to a node with string keys. So, this is one way you could store the third dimension. The backend architecture should be able to handle this pretty seamlessly
Oh I see, this makes sense. I couldn't find this info online, could you share a link if it's publicly available? I'm not sure if these attributes are documented in PyZX.
I believe this time information will be included in the Pauli Web computed by PyZX.
I agree in the sense that PyZX labels non-Clifford nodes with respect to the 'future' nodes that they control. I'll follow your initial work and we can come back to this if removing time coordinates poses a problem.
Oh I see, this makes sense. I couldn't find this info online, could you share a link if it's publicly available? I'm not sure if these attributes are documented in PyZX.
This is not publicly documented and is quoted from an email message from Aleks, but you can look through the PyZX code or API to find this information.
Based on the recent demo, a full visualisation companion is not (yet) within reach. Functionality is there, but I truly struggled with the controls. They're not intuitive at all. Ooops.
Having said that, part of what I do to get data out of PyZX seems related to this:
By Aleks:
- in PyZX, positions are 2D. Every spider has a coordinate (q, r), which are referred to as "qubit" and "row". If the ZX diagram came from a circuit, this is indeed the qubit and row where the spider occurs, but otherwise you can just think of these as "y" and "x"
- PyZX allows attaching arbitrary additional data to a node with string keys. So, this is one way you could store the third dimension. The backend architecture should be able to handle this pretty seamlessly
Getting dimensions from PyZX. The function below extracts the dimensions I found available in PyZX to JSON. There are 3 or 4 other "to_***" functions with similar effect but for other formats. Link to docs.
# (`g` is a PyZX graph)
import json

def jsonify(g):
    """Extracts graph data into JSON format."""
    json_g = json.loads(g.to_json())
    return json_g
Adding arbitrary 3D data. Plan was to eventually transform 3D models back to PyZX using extra fields and PyZX's g.from_json(). Link to docs.
import copy

def reformat_json(json_g, position_z=0):
    """Add arbitrary data to JSON in a format compatible with PyZX '.from_json()'"""
    rebuilt_json = copy.deepcopy(json_g)
    for key in ["wire_vertices", "node_vertices"]:
        for vertex in rebuilt_json[key]:
            rebuilt_json[key][vertex]["annotation"]["extra"] = {
                "position_z": position_z,
            }
    return rebuilt_json
import pyzx as zx

def import_json(json_string):
    """Imports a JSON string as a PyZX graph"""
    g = zx.Graph()
    graph = g.from_json(json_string)
    return graph
# Ps. I'm omitting some dictionary -> string transformations I have elsewhere, for space. Reformat takes a dict. Import takes a string.
Expected behaviour. When designated as an extra field, PyZX imports the full thing and ignores the extra field when building the graph, but the data is stored and remains there:
# Extract of JSON export before arbitrary info is imported via "g.from_json()".
"b0": {
  "annotation": {
    "boundary": true,
    "coord": [0, 0],
    "input": 0,
    "name": "b0"
  }
}, ...

# Re-export after arbitrary info is imported via "g.from_json()".
"b0": {
  "annotation": {
    "boundary": true,
    "coord": [0, 0],
    "input": 0,
    "name": "b0",
    "extra": {
      "position_z": 1
    }
  }
}, ...
I assume there are other ways to add the extra fields, but that's one way.
Hope it helps somewhat.
Linking #315. LaSsynth seems like a good way to synthesize a PyZX graph as a BlockGraph.
Linking #315. LaSsynth seems like a good way to synthesize a PyZX graph as a BlockGraph.
Yes, I think it would be a good starting point for block synthesis. One challenge I'm aware of is that LaSsynth requires explicitly specifying the locations of the ports (inputs/outputs) and the target spacetime volume. This means the process would involve iterating between synthesis, adjusting port locations, and squeezing or expanding the spacetime volume to achieve a valid or optimized block structure. While this could take a considerable amount of time, it should definitely be a viable approach for achieving a working synthesis.
I'll start looking into how to replace the ZXGraph data structure with the PyZX graph in the next few days.
I'm working on this. Since it's quite a large refactoring and I'm on the holiday of the Chinese Spring Festival, it may take a bit more time.
I'm working on this. Since it's quite a large refactoring and I'm on the holiday of the Chinese Spring Festival, it may take a bit more time.
Yes, please take your time, there's no rush. I'm working on other Issues and am only updating this issue for tracking purposes.
Hi, thanks for working on integrating PyZX into TQEC. PyZX has a draw_3d() function that is in progress. In addition to the (q, r) coordinate specified above, they are using g.set_vdata(node_ids, 'z', z_position) to specify the second spatial dimension. I have private, functional code that implements a surface code encoder circuit in PyZX and managed to use their draw_3d function, although I prefer TQEC's previous ZXGraph visualisation of Pauli webs.
In my last email correspondence with John and Aleks, their Pauli web functionality is not yet ready to be overlaid on top of their 3D plots.
Hi, we have implemented the function calling draw_3d:
def pyzx_draw_positioned_zx_3d(
    g: PositionedZX,
    id_labels: bool = True,
    pauli_web: PauliWeb[int, tuple[int, int]] | None = None,
) -> None:
    """Draw the positioned ZX graph in 3D with ``pyzx.draw_3d``.

    Args:
        g: The positioned ZX graph to draw.
        id_labels: Whether to show the vertex id labels. Default is True.
        pauli_web: The Pauli web to draw. Default is None.
    """
    from pyzx import draw_3d

    plot_g = g.g.clone()
    for v in plot_g.vertices():
        position = g.positions[v]
        plot_g.set_qubit(v, position.x)
        plot_g.set_row(v, position.y)
        plot_g.set_vdata(v, "z", position.z)
    draw_3d(plot_g, labels=id_labels, pauli_web=pauli_web)
But as you mentioned, it is not functioning correctly, at least for pyzx==0.9.0.
We can still use the previous plot functions implemented in tqec to draw a (positioned) ZX graph and correlation surface (pauli web):
from tqec.interop.pyzx.plot import plot_positioned_zx_graph, draw_correlation_surface_on
from tqec.gallery import cnot
cnot_block_graph = cnot()
correlation_surfaces = cnot_block_graph.find_correlation_surfaces()
positioned_zx = cnot_block_graph.to_zx_graph()
fig, ax = plot_positioned_zx_graph(positioned_zx)
draw_correlation_surface_on(correlation_surfaces[0], positioned_zx, ax)
This issue might be closed by #465, #466 or #467. If that is not the case, what is missing in order to close this issue?
Hi all, I am catching up on this interesting topic.
My understanding (from the conversation above) is that:
- previous implementations of ZX graphs (ZXGraph in TQEC src) have been removed
- we adopted SGraph from PyZX (wrapping it into a PositionedZX data structure)
- we want to leverage the 3D visualization methods in PyZX, but they currently cannot visualize "observables / correlation surfaces"
- LaSsynth provides some helpful methods to convert a ZX graph to a 3D-pipe representation (TQEC BlockGraph)
- non-Clifford operations and Hadamard gates are not handled by LaSsynth
Please, correct me if the points above are not accurate.
Question: Are we developing in-house algorithms to make the ZX graph --> 3D-pipe map? Either without optimization or with space-time volume minimization.
Yes, we are definitely interested in developing in-house methods to convert a ZX graph to lattice surgery. Kabir and Yiming would be good people to reach out to and sync up with.
This issue might be closed by #465, #466 or #467. If that is not the case, what is missing in order to close this issue?
Thanks to Yiming's PRs the position geometry of PyZX graphs can be mapped to BlockGraphs. Some work is needed to parse a PyZX graph into a PositionedZX object.
I'm referring to these files: https://tqec.github.io/tqec/_modules/tqec/interop/pyzx/synthesis/strategy.html#block_synthesis
The positioned_block_synthesis method "requires specifying the 3D positions of each vertex explicitly in the ZX graph". We have a ZX graph, but no such 3D labeling. I don't know if there's a way to do this that is induced by the PositionedZX object, perhaps @inmzhang can advise. Judging from @jbolns analysis in this earlier comment, it definitely seems feasible.
On a separate note, Yiming's LaSsynth experiments were able to produce a Steane code BlockGraph, but there's a lot of variation in the expected output. If we do not find a way to automatically label 2D ZX graphs with 3D positions, then creating an interface to load the subroutine in Yiming's script also seems like a viable option. Sustainably integrating LaSsynth beyond this MVP would require more refactoring.
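Along the lines of that earlier comment, explicit 3D labels could be recovered from a JSON export carrying the extra position_z field. Below is a sketch, assuming the annotation shape shown earlier in this thread (annotation.coord plus annotation.extra.position_z); the function name is mine, and actually feeding the result into positioned_block_synthesis is left out since I haven't checked its exact signature.

```python
# Sketch: turn a PyZX-style JSON export (with the "extra" annotation
# from the earlier comment) into an explicit {vertex: (x, y, z)} labeling.
def positions_from_json(json_g):
    positions = {}
    for key in ("wire_vertices", "node_vertices"):
        for name, data in json_g.get(key, {}).items():
            ann = data["annotation"]
            x, y = ann["coord"]
            # Vertices without the extra field default to the z = 0 layer.
            z = ann.get("extra", {}).get("position_z", 0)
            positions[name] = (x, y, z)
    return positions

json_g = {
    "wire_vertices": {
        "b0": {"annotation": {"boundary": True, "coord": [0, 0], "input": 0,
                              "name": "b0", "extra": {"position_z": 1}}}
    },
    "node_vertices": {
        "v0": {"annotation": {"coord": [1, 0]}}
    },
}
# -> {"b0": (0, 0, 1), "v0": (1, 0, 0)}
```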
we want to leverage the 3D visualization methods in PyZX, but they currently cannot visualize "observables / correlation surfaces"
As I understand, we're not interested in using PyZX's methods for 3D visualization; rather, we want to modify them to give position coordinates for BlockGraphs. All your other bullet points seem correct.
Hi everyone,
With #465 and #466, we have removed the ZXGraph data structure from the tqec codebase and introduced the interop/pyzx module to enable basic interoperability with PyZX. Specifically:
- We defined the concept/interface of "block synthesis," which involves laying out the ZX graph in 3D and synthesizing it into a valid BlockGraph.
- Currently, only the simplest synthesis method is implemented: assigning a 3D position to each ZX graph node and using a one-to-one mapping to convert it into a BlockGraph.
- CorrelationSurface is now computed directly on a pyzx.GraphS.
- Some plotting methods are available for pyzx.GraphS.
As you can see, block synthesis is not yet automated. This is something to consider over the next two weeks. In the code, you’ll need to define a new SynthesisStrategy:
https://github.com/tqec/tqec/blob/10ab09548e3aeb8f24a12b59ba5cfddb07751bc5/src/tqec/interop/pyzx/synthesis/strategy.py#L15-L18
Then implement the strategy using the following interface:
https://github.com/tqec/tqec/blob/10ab09548e3aeb8f24a12b59ba5cfddb07751bc5/src/tqec/interop/pyzx/synthesis/strategy.py#L31-L37
I can implement the LASSYNTH strategy next week, though technically, it synthesizes from the stabilizer flow rather than a "built ZX graph." It is far from optimal or ideal because we do not have a good way to assign the input/output ports before synthesis. More strategies—whether systematic or heuristic—will be needed, and anyone is welcome to contribute by developing their own algorithms.
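To make "implement the strategy using the following interface" concrete without guessing at the real signatures, here is a deliberately simplified, hypothetical sketch of a pluggable strategy. Everything in it (SynthesisStrategy as a plain callable alias, line_strategy, the adjacency-dict graph representation) is my own illustration; the actual interface is defined in the strategy.py lines linked above and should be followed instead.

```python
# Hypothetical sketch of a pluggable synthesis strategy (illustrative names
# only; the real interface is in src/tqec/interop/pyzx/synthesis/strategy.py).
from typing import Callable, Dict, List, Tuple

Position = Tuple[int, int, int]
# A "strategy" is modeled here as: ZX graph (as adjacency) -> vertex positions.
SynthesisStrategy = Callable[[Dict[int, List[int]]], Dict[int, Position]]

def line_strategy(adjacency):
    """Trivial stand-in strategy: lay vertices out along the time axis."""
    return {v: (0, 0, t) for t, v in enumerate(sorted(adjacency))}

# A 3-vertex path graph, laid out as a line of blocks in time.
adjacency = {0: [1], 1: [0, 2], 2: [1]}
layout = line_strategy(adjacency)
# -> {0: (0, 0, 0), 1: (0, 0, 1), 2: (0, 0, 2)}
```

A real strategy would additionally have to respect spider types, port locations, and edge geometry when producing the BlockGraph.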
Hi Gian,
we want to leverage the 3D visualization methods in PyZX, but they currently cannot visualize "observables / correlation surfaces"
I would say this is an optional integration, but good to have.
LaSsynth provides some helpful methods to convert a ZX graph to a 3D-pipe representation
Actually it synthesizes from the provided stabilizer flow rather than a ZX graph. You can surely compute the stabilizer flows from a built ZX graph and then use LaSsynth to synthesize into a BlockGraph.
non-Clifford operations and Hadamard gates are not handled by LaSsynth
Hadamard gates can be handled.
Question: Are we developing in-house algorithms to make the ZX graph --> 3D-pipe map? Either without optimization or with space-time volume minimization.
Yes, for the short-term goal like next two weeks, a solution even without spacetime optimization would be great to have.
Thanks for the prompt responses.
Concerning the development of in-house methods to convert a ZX graph to lattice surgery, is it reasonable to:
- start from ZX graphs in a simplified form (for example by enforcing that they can be seen as "layered" in time and with each node having at most 2 "temporal" edges)
- find the lattice surgery equivalent without spacetime optimization
- [optional] provide ways (if not already in PyZX) to convert arbitrary ZX graphs to this simplified form
- [optional] design a "better" simplified form that makes it easier to convert those ZX graphs to lattice surgery
- [optional] extend mapping to arbitrary ZX graphs
- [optional] improve mapping strategy to minimize spacetime volume
- [optional] develop methods to directly manipulate lattice surgery and reduce its spacetime volume
Hi Kabir,
Some work is needed to parse a PyZX graph into a PositionedZX object.
Converting a pyzx.GraphS into a BlockGraph is exactly the target of "block_synthesis".
The positioned_block_synthesis method "requires specifying the 3D positions of each vertex explicitly in the ZX graph". We have a ZX graph, but no such 3D labeling. I don't know if there's a way to do this that is induced by the PositionedZX object, perhaps @inmzhang can advise. Judging from @jbolns analysis in this earlier comment, it definitely seems feasible.
positioned_block_synthesis is not expected to be used in the future because it is a lay-out-by-hand strategy, and we need an automatic algorithm.
The bit that you're missing is what I think I have already done partially and imperfectly. @inmzhang @KabirDubey @afowler
Necessary background. A mistake I made with the visualisation interface was to try fitting all features below into a few afternoons of coding and focusing the presentation on the last step rather than the full process.
1. Extraction of PyZX graphs into a 3D-friendly format
2. A website-based interface for graphical editing
3. Importing of (1) into (2)
4. Visualisation of (3)
5. User controls
Step 1, in particular, seemed fairly attainable by approaching it in an ETL (extract, transform, load) fashion.
Suggested ETL approach.
Extract
- Dump 2D PyZX coordinates into a JSON (achieved: PyZX gives this graciously).
- Add placeholders for 3D coordinates (achieved, easy).
- While at it, it is possible to "explode" the 2D structure by expanding the space between nodes along the x and y axes and moving some nodes along the z axis (achieved, see top image below).
Transform
- Go row by row shifting nodes along the y-axis to eliminate gaps between nodes (achieved, see bottom image below – with some minor pending challenges).
- Rotate any nodes needing rotation (pending, but I think I can do it using a similar rationale as in previous step).
- Introduce pipes in the places still missing connections (pending, bit harder but seemingly achievable).
- Deal with bends (haven't thought about this – this will be the hard part)
Load
- The final structure should in theory be easy to traverse.
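The first transform step (shifting nodes along the y-axis to eliminate gaps) can be sketched as a simple coordinate compaction. This is my own minimal illustration, assuming integer grid coordinates in the {name: (x, y, z)} shape produced by the extract step:

```python
# Sketch of the "eliminate gaps" transform: remap the occupied y values to
# consecutive integers so nodes become adjacent. Assumes an integer grid.
def compact_y(nodes):
    """nodes: {name: (x, y, z)} -> same dict with gap-free y coordinates."""
    used_y = sorted({y for (_, y, _) in nodes.values()})
    remap = {y: i for i, y in enumerate(used_y)}
    return {n: (x, remap[y], z) for n, (x, y, z) in nodes.items()}

nodes = {"a": (0, 0, 0), "b": (0, 4, 0), "c": (1, 9, 1)}
# -> {"a": (0, 0, 0), "b": (0, 1, 0), "c": (1, 2, 1)}
```

The later transform steps (rotations, inserting pipes, bends) would operate on the same dict but need real geometric rules, which is where the hard part lives.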
Challenges. The biggest challenge is that I can't think of an algorithm that will give perfect results 100% of the time. The really difficult part, i.e., connections across rows of qubits, gets harder as distance grows.
Some form of manual editing seems needed.
Viability. If we can use the manual process now available to edit whatever imperfections the ETL approach above might leave, then this seems pragmatic: it's no longer an all-or-nothing gamble but a question of how close to perfection we can get the automated part of the process.
Images
(Completely automated generation) (Imported and visualised into Astro - ThreeJS website)
(Do keep in mind the last image is only step 1 of the transform process. It is NOT the final intended objective.)
J, thanks for writing this up. Some comments on your approach:
- The "Extract" phase you proposed does not distinguish different spider types in the ZX diagram, which can be a problem because the spider type determines the local blocks in the BlockGraph.
- You lay out the nodes following the node order in the ZX diagram as the initial design. In my view, the 2D coordinates provided by the circuit-form ZX diagram are not important or helpful for the problem, because only the connectivity matters in a ZX graph. A ZX graph can even include no row and col attributes. And the block alignment in the final BlockGraph can be very different from the node order in the original ZX diagram.
Yeah, you're, as usual, on point @inmzhang. I don't think I have an optimal answer, but it is worth considering the following.
Point 2. The extract stage gives you a JSON that no longer has rows, cols but x, y, z positions in the format needed by PositionedZX. That said, yes, the extract stage honours PyZX data by maintaining relative positions, proportions, number, and even type of nodes. It is, indeed, an extraction job. In my mind, any additional optimisation is a "problem for the future".
Point 1.A. Sounds like a transformation challenge.
- Infer from rotation in transform 2 / add rotation_x, _y, and _z parameters to PositionedZX?
- Transform 4?
- Manual editing of only a few nodes (as opposed to the current situation where everything needs manual input)?
Point 1.B. Unless...
- If there's a PyZX export containing this info (I initially chose PyZX's JSON export because it seemed the most complete but maybe I missed something), the extract stage can indeed incorporate it (if there is none, wouldn't this be a problem in any approach?).
- If we have a function, class, or whatevz somewhere in TQEC that can go over a PyZX graph, infer this prior export, and write it in the extra fields noted in my previous comment, then, if I recall correctly, PyZX would include this in its JSON export (which would make it available to all steps in the ETL process I described).
@afowler I am looking at ZX graphs like the one in slide 13:
For the in-house methods to do the mapping is it reasonable to assume (possibly as a starting point):
- we have a sort of "circuit" structure with horizontal lines corresponding to logical qubits and vertical lines to "operations"
- even without that interpretation, it is still possible to represent the ZX graph only with horizontal and vertical edges.
- every node can be connected with at most 3 other nodes, two horizontally and one vertically (if required, we can include up to 4 links per node, two vertical and two horizontal)
- no diagonal links
In this case I may have a strategy, albeit a really inefficient one.
Not sure if I can code it by Wed, but can surely discuss it.
Also, not a required condition, but a curiosity:
- all vertical links connect nodes of different color. Can this be assumed without loss of generality? I think this comes from the decision of splitting nodes with multiple links horizontally instead of vertically.