[BUG] Quanvolution demo results don't match rendered version
First noted by a user here, and confirmed independently by me using a fresh environment with the most recent version of PennyLane and the contents of the qml repo requirements file.
The plots produced at the end (left) don't match the results in the demo (right). The user obtained plots identical to my locally run version (given this, and the consistency of the results, the issue doesn't appear to stem from random seeds).
My first guess was that this was a non-executed demo, but it looks like it is executed 🤔
Yes, same here. Do you get different results as well when you run it locally?
Might not get a chance currently! Will pencil it in as a todo. I wonder if this is related to @co9olguy's numerical accuracy issue. @glassnotes, if you run the PL test suite locally, does everything pass?
It actually does not - I get a few failures (noted below), but nothing I would expect to affect the results of the demo execution.
```
FAILED tests/test_about.py::test_about - AssertionError: assert '0.16.0.dev' in '0.15.0.dev0'
FAILED tests/circuit_graph/test_qasm.py::TestQASMConformanceTests::test_agrees_qiskit_plugin - pkg_resources.VersionConflict: (Penn...
FAILED tests/circuit_graph/test_qasm.py::TestQASMConformanceTests::test_basis_state_agrees_qiskit_plugin - pkg_resources.VersionCon...
FAILED tests/math/test_functions.py::test_conj[t5] - TypeError: loop of ufunc does not support argument 0 of type ResourceVariable ...
```
Hey @glassnotes, is this still showing the incorrect result?
Just ran it locally, and yes, it does. The results in this graph are consistent with the previous version I obtained running locally, which is still inconsistent with the live demo.
😬 😬 😬
It's super weird... I again used a fresh environment, installing directly from the repo requirements file. Maybe it's because I used Python 3.8? What version does the CI use? Do you observe this locally too?
Looks like the CI is using Python 3.7.2.
If the demo doesn't set a seed, it could be due to the CI running all the demos in sequence, resulting in a 'better' seed state when this demo is run.
This demo actually does set seeds though, that's the weird thing. For both numpy and tensorflow 😕
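For what it's worth, here's a minimal sketch (using a hypothetical seed value, not necessarily the demo's) of why the CI running other demos first shouldn't matter once the demo reseeds the global NumPy RNG. `tf.random.set_seed` is the TensorFlow counterpart; it's omitted here so the snippet runs without TensorFlow installed.

```python
import numpy as np

SEED = 0  # hypothetical; the demo uses its own seed value

np.random.seed(SEED)
a = np.random.rand(3)

np.random.seed(123)   # simulate earlier demos consuming RNG state...
_ = np.random.rand(100)

np.random.seed(SEED)  # ...an explicit reseed resets the stream
b = np.random.rand(3)

# Identical draws despite the interleaved use, so prior demos can't
# change this demo's random numbers once it reseeds.
assert np.allclose(a, b)
```

So if the discrepancy isn't coming from the seed, it's more likely a library-version or numerical-accuracy difference between the CI environment and local runs.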
It seems that this problem has been solved. I have run the notebook in Google Colab and this is my output:
The demo seems to show the same:
@KetpuntoG nice! Although I suppose the mystery is not solved, as your results show classical training beating the quantum training, which isn't what the demo originally showed.
Seems very dependent on the seed. If we want something where quantum "wins", we could use seed 42, although I'm not a big fan of these things.
Yeah, I would say this is maybe more an issue with the conclusion of the demo, rather than a seed issue