MrMustard
Opt contraction
Context: At the moment a user has to figure out the optimal path to contract a circuit. This PR introduces a utility that computes the optimal path automatically and allows for custom heuristics.
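To illustrate the idea (not this PR's actual algorithm — the function name and the cost model below are made up for the sketch), an automatic contraction path can be chosen with a greedy heuristic: repeatedly contract the pair of tensors that is cheapest right now. A custom heuristic would simply swap out the cost function.

```python
def greedy_path(sizes):
    """Toy greedy contraction order: repeatedly contract the two cheapest
    tensors. `sizes` are abstract tensor sizes; contracting tensors of
    sizes a and b is modelled as costing a * b and producing a tensor of
    size max(a, b). This cost model is a stand-in for illustration only."""
    sizes = list(sizes)
    path, total_cost = [], 0
    while len(sizes) > 1:
        # indices of the two smallest tensors = cheapest pair to contract
        i, j = sorted(range(len(sizes)), key=sizes.__getitem__)[:2]
        total_cost += sizes[i] * sizes[j]
        path.append((min(i, j), max(i, j)))
        merged = max(sizes[i], sizes[j])
        for k in sorted((i, j), reverse=True):  # pop the higher index first
            sizes.pop(k)
        sizes.append(merged)  # the contraction result goes to the end
    return path, total_cost
```

A real implementation would derive the cost from the wires being contracted rather than a single scalar per tensor, but the greedy skeleton is the same.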
Description of the Change:
Benefits:
Possible Drawbacks:
Related GitHub Issues:
Codecov Report
Attention: Patch coverage is 94.77912% with 13 lines in your changes missing coverage. Please review.
Project coverage is 88.03%. Comparing base (8946ea5) to head (ba98716).
Additional details and impacted files
```diff
@@            Coverage Diff             @@
##           develop     #397     +/-   ##
===========================================
+ Coverage    87.95%   88.03%   +0.07%
===========================================
  Files           81       80       -1
  Lines         6310     6443     +133
===========================================
+ Hits          5550     5672     +122
- Misses         760      771      +11
```
| Files | Coverage Δ | |
|---|---|---|
| mrmustard/lab/abstract/state.py | 92.66% <ø> (ø) | |
| mrmustard/lab/circuit_drawer.py | 92.15% <ø> (ø) | |
| mrmustard/lab_dev/circuit_components_utils.py | 100.00% <100.00%> (ø) | |
| ...mustard/lab_dev/transformations/transformations.py | 100.00% <100.00%> (ø) | |
| mrmustard/lab_dev/wires.py | 98.27% <100.00%> (-0.82%) | :arrow_down: |
| mrmustard/math/backend_numpy.py | 100.00% <ø> (ø) | |
| mrmustard/math/backend_tensorflow.py | 100.00% <ø> (ø) | |
| mrmustard/math/lattice/strategies/vanilla.py | 100.00% <100.00%> (ø) | |
| mrmustard/math/parameters.py | 95.45% <ø> (ø) | |
| mrmustard/physics/ansatze.py | 96.71% <100.00%> (+0.03%) | :arrow_up: |
| ... and 8 more | | |
Continue to review full report in Codecov by Sentry.
Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 8946ea5...ba98716. Read the comment docs.
This feels like I'm just not getting it, but should `.auto_shape()` have the side effect of setting the `fock_shape` attribute? In my basic test below, I just get `[None]`:

```python
import mrmustard as mm
from mrmustard.lab_dev import *

cc = Number([0], 4) >> Sgate([0], 1)
cc.auto_shape()
print(cc.fock_shape)
```
It seems that `.auto_shape()` in this instance returns `(100,)`. Is this right?
Nice find @aplund! It looks like in the DM and Ket implementations of `auto_shape()`, we only update the object's `fock_shape` if it has a Bargmann representation. This is a good example of my hesitation above about side effects (the weirdness here is that they are only sometimes applied) and why I'd prefer to avoid them.
> This feels like I'm just not getting it, but should `.auto_shape()` have the side effect of setting the `fock_shape` attribute? In my basic test below, I just get `[None]`:
>
> ```python
> import mrmustard as mm
> from mrmustard.lab_dev import *
>
> cc = Number([0], 4) >> Sgate([0], 1)
> cc.auto_shape()
> print(cc.fock_shape)
> ```
>
> It seems that `.auto_shape()` in this instance returns `(100,)`. Is this right?
Yep, that was a bug! `CircuitComponent` was initializing `_fock_shape` in the wrong way. It should be fixed now.
Again, I may be "doing it wrong™", but I was expecting this to change the shape as the squeezing increases. Am I missing a step somewhere?

```python
import numpy as np
import mrmustard as mm
from mrmustard.lab_dev import *

for r in np.linspace(0.0, 3.0, 31):
    cc = Number([0], 0) >> Sgate([0], r)
    cc.auto_shape()
    f = cc.fock()
    tail = np.sqrt(np.sum(np.abs(f[..., -10:])**2))
    print(f"r={r:0.2f}, shape={cc.fock_shape}, tail={tail}")
```
Here's my output:

```
r=0.00, shape=[50], tail=0.0
r=0.10, shape=[50], tail=3.320813549783774e-21
r=0.20, shape=[50], tail=2.877796850159443e-15
r=0.30, shape=[50], tail=7.005423842389163e-12
r=0.40, shape=[50], tail=1.4442895820803e-09
r=0.50, shape=[50], tail=7.391845079536454e-08
r=0.60, shape=[50], tail=1.527182338092643e-06
r=0.70, shape=[50], tail=1.6608089924466752e-05
r=0.80, shape=[50], tail=0.0001119152559711487
r=0.90, shape=[50], tail=0.0005210655981549683
r=1.00, shape=[50], tail=0.0018108286711155058
r=1.10, shape=[50], tail=0.004974852404081372
r=1.20, shape=[50], tail=0.011292920255741735
r=1.30, shape=[50], tail=0.02193479266151498
r=1.40, shape=[50], tail=0.03749458889213709
r=1.50, shape=[50], tail=0.057708136029986395
r=1.60, shape=[50], tail=0.08148204007111474
r=1.70, shape=[50], tail=0.10718002773885009
r=1.80, shape=[50], tail=0.1330086283344037
r=1.90, shape=[50], tail=0.15735305275690456
r=2.00, shape=[50], tail=0.178984595152066
r=2.10, shape=[50], tail=0.1971322790005048
r=2.20, shape=[50], tail=0.2114532602089405
r=2.30, shape=[50], tail=0.22194764197883732
r=2.40, shape=[50], tail=0.22885591269080502
r=2.50, shape=[50], tail=0.23256342153828174
r=2.60, shape=[50], tail=0.23352358559759126
r=2.70, shape=[50], tail=0.23220268304523672
r=2.80, shape=[50], tail=0.22904418030721152
r=2.90, shape=[50], tail=0.2244485284290604
r=3.00, shape=[50], tail=0.21876408762339897
```
Should the shape not increase as `r` increases?
> Again, I may be "doing it wrong™", but I was expecting this to change the shape as the squeezing increases. Am I missing a step somewhere: [...] Should the shape not increase as `r` increases?
@aplund I think you wanted this:

```python
for r in np.linspace(0.0, 3.0, 31):
    cc = Vacuum([0]) >> Sgate([0], r)
    cc.auto_shape()
    print(f"r={r:0.2f}, shape={cc.fock_shape}")  # works
```
The issue with your code was that `Number` has a `Fock` representation, so the `Sgate` needs to go to Fock itself. The best it can do is match the shape (of 1) on the vacuum side; on the output side it has no way to determine how far to go, so it defaults to the value in the settings. On the other hand, `Vacuum` has a `Bargmann` representation, so we can play all the tricks in our book.
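The reason a sensible shape can be determined at all in the Bargmann case is that the photon-number distribution can be computed analytically. As a MrMustard-independent illustration (pure NumPy; the helper names here are made up, not library API), the squeezed-vacuum distribution P(2m) = C(2m, m) tanh(r)^{2m} / (4^m cosh(r)) lets you compute how large a Fock cutoff is needed to capture, say, 99.9% of the state, and that cutoff grows quickly with r:

```python
import numpy as np
from math import tanh, cosh

def squeezed_vacuum_probs(r, max_n=2000):
    """Photon-number distribution of a single-mode squeezed vacuum:
    P(2m) = C(2m, m) * tanh(r)**(2m) / (4**m * cosh(r)); odd n vanish.
    Computed via the term ratio to avoid overflowing binomials."""
    t = tanh(r) ** 2
    probs = np.zeros(max_n)
    term = 1.0 / cosh(r)  # P(0)
    probs[0] = term
    for m in range(1, max_n // 2):
        term *= 2 * (2 * m - 1) / m * (t / 4.0)  # ratio P(2m) / P(2m - 2)
        probs[2 * m] = term
    return probs

def needed_cutoff(r, probability=0.999):
    """Smallest Fock shape capturing `probability` of the state."""
    cum = np.cumsum(squeezed_vacuum_probs(r))
    return int(np.searchsorted(cum, probability)) + 1

for r in (0.5, 1.5, 2.5):
    print(f"r={r}: cutoff for 99.9% = {needed_cutoff(r)}")
```

For larger r the required cutoff climbs into the hundreds, which is exactly the information a Fock-represented component has no cheap way to recover.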
> The issue with your code was that `Number` has a `Fock` representation, so the `Sgate` needs to go to Fock itself and the best it can do is to match the shape (of 1) on the vacuum side and on the output side it has no way to determine how far to go and so it defaults to the value in the settings. On the other hand, `Vacuum` has a `Bargmann` representation so we can play all the tricks in our book.
So there's an implicit difference between `Number([0], 0)` and `Vacuum([0])` that makes them not interchangeable? If so, this should probably be bolded and italicised somewhere, as I think most people would consider them the same.
When I run the code you suggested, I get the output:

```
r=0.00, shape=[None]
r=0.10, shape=[None]
r=0.20, shape=[None]
r=0.30, shape=[None]
r=0.40, shape=[None]
r=0.50, shape=[None]
r=0.60, shape=[None]
r=0.70, shape=[None]
r=0.80, shape=[None]
r=0.90, shape=[None]
r=1.00, shape=[None]
r=1.10, shape=[None]
r=1.20, shape=[None]
r=1.30, shape=[None]
r=1.40, shape=[None]
r=1.50, shape=[None]
r=1.60, shape=[None]
r=1.70, shape=[None]
r=1.80, shape=[None]
r=1.90, shape=[None]
r=2.00, shape=[None]
r=2.10, shape=[None]
r=2.20, shape=[None]
r=2.30, shape=[None]
r=2.40, shape=[None]
r=2.50, shape=[None]
r=2.60, shape=[None]
r=2.70, shape=[None]
r=2.80, shape=[None]
r=2.90, shape=[None]
r=3.00, shape=[None]
```
Is this the expected behaviour?
> So there's an implicit difference between `Number([0], 0)` and `Vacuum([0])` that makes them not interchangeable? If so, this should probably be bolded and italicised somewhere, as I think most people would consider them the same.
Sure, we can make it clearer in the docstrings.
> When I run the code you suggested, I get the output:
>
> ```
> r=0.00, shape=[None] ...
> ```
>
> Is this the expected behaviour?
Yes, now it is. I have removed the lines where `fock_shape` was updated by `auto_shape`. Since `auto_shape` takes about 100us per mode to run, it's okay if we run it multiple times.
With this change the number in `settings.AUTOCUTOFF_MIN_CUTOFF` doesn't seem to be respected any longer. Is that intended?
Hi Jacob. There are two settings that can be modified instead. `AUTOSHAPE_PROBABILITY` lets you set a minimum norm that you want autoshape to preserve for states. `AUTOSHAPE_MAX` is the maximum shape you want Fock objects to go to (so it won't go higher than this for states, and it will automatically go to this for things that are not states).
From https://github.com/XanaduAI/MrMustard/blob/develop/mrmustard/utils/settings.py:
```python
self.AUTOSHAPE_PROBABILITY = 0.999
r"""The minimum l2_norm to reach before automatically stopping the Bargmann-to-Fock conversion. Default is 0.999."""

self.AUTOCUTOFF_MAX_CUTOFF = 100  # TODO: remove in MM 1.0
r"""The maximum value for autocutoff. Default is ``100``."""

self.AUTOCUTOFF_MIN_CUTOFF = 1  # TODO: remove in MM 1.0
r"""The minimum value for autocutoff. Default is ``1``."""

self.AUTOSHAPE_MAX = 50
r"""The max shape for the autoshape. Default is ``50``."""
```
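Assuming `mm.settings` exposes these attributes as in the excerpt above, the obvious usage pattern would be to adjust them at runtime before converting to Fock, something along these lines (a sketch, not a tested recipe):

```python
import mrmustard as mm

# Capture more of each state before truncating (default is 0.999).
mm.settings.AUTOSHAPE_PROBABILITY = 0.9999

# Raise the ceiling on the shape autoshape may choose (default is 50).
mm.settings.AUTOSHAPE_MAX = 200
```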
So AUTOCUTOFF_MIN_CUTOFF does nothing then?
Not in lab_dev, as far as I know. It is still there for lab.
Should we maybe have `AUTOSHAPE_PROBABILITY` default to 1 instead of 0.999? Or maybe think about some relative way of doing things when Fock measurements on Gaussian states are involved. E.g., in my example above the default behaviour doesn't seem desired: I get 0% of the correct output state, whereas I might have expected to get at least 0.999 of it.
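The concern can be made concrete without MrMustard at all (pure NumPy; the squeezed-vacuum distribution is textbook, the helper name is illustrative): with a hard ceiling of 50 Fock amplitudes, the fraction of a squeezed vacuum that survives truncation falls well below 0.999 as r grows, so a fixed `AUTOSHAPE_MAX` silently breaks the `AUTOSHAPE_PROBABILITY` guarantee:

```python
import numpy as np
from math import tanh, cosh

def captured_fraction(r, cutoff=50, max_n=4000):
    """Probability weight of a single-mode squeezed vacuum that lies
    below a Fock cutoff.  Only even photon numbers are populated."""
    t = tanh(r) ** 2
    term = 1.0 / cosh(r)  # P(n = 0)
    probs = [term]
    for m in range(1, max_n // 2):
        term *= 2 * (2 * m - 1) / m * (t / 4.0)  # ratio P(2m) / P(2m - 2)
        probs.append(term)
    probs = np.array(probs)  # probs[m] = P(n = 2m)
    # n < cutoff means m <= (cutoff - 1) // 2, i.e. the first (cutoff + 1) // 2 entries
    return probs[: (cutoff + 1) // 2].sum() / probs.sum()

for r in (0.5, 1.5, 2.5):
    print(f"r={r}: fraction captured below n=50 is {captured_fraction(r):.4f}")
```

A relative criterion, as suggested above, would keep this fraction at or above the configured probability instead of capping the shape outright.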