
[BUG] Quanvolution demo results don't match rendered version

[Open] glassnotes opened this issue · 14 comments

First noted by a user here, and confirmed independently by me using a fresh environment with the most recent version of PennyLane and the contents of the qml repo's requirements file.

The plots produced at the end (left) don't match the results in the demo (right). The user obtained identical plots to my locally run version (given this, and the consistency of the results, the issue doesn't appear to stem from random seeds or anything similar).

[image: locally produced plots (left) vs. plots in the rendered demo (right)]

glassnotes · Apr 21 '21 15:04

My first guess was that this was a non-executed demo, but it looks like it is executed 🤔

josh146 · Apr 21 '21 15:04

> My first guess was that this was a non-executed demo, but it looks like it is executed 🤔

Yes, same here. Do you get different results as well when you run it locally?

glassnotes · Apr 21 '21 15:04

Might not get a chance currently! Will pencil it in as a todo. I wonder if this is related to @co9olguy's numerical accuracy issue. @glassnotes, if you run the PL test suite locally, does everything pass?

josh146 · Apr 21 '21 15:04

It actually does not - I get a few failures (noted below), but nothing I would expect to affect the results of the demo execution.

FAILED tests/test_about.py::test_about - AssertionError: assert '0.16.0.dev' in '0.15.0.dev0'
FAILED tests/circuit_graph/test_qasm.py::TestQASMConformanceTests::test_agrees_qiskit_plugin - pkg_resources.VersionConflict: (Penn...
FAILED tests/circuit_graph/test_qasm.py::TestQASMConformanceTests::test_basis_state_agrees_qiskit_plugin - pkg_resources.VersionCon...
FAILED tests/math/test_functions.py::test_conj[t5] - TypeError: loop of ufunc does not support argument 0 of type ResourceVariable ...
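For context, the first failure is just a version-string mismatch between the checked-out source and the installed package, not anything that touches demo numerics. A minimal sketch of what that assertion is doing (version strings taken from the failure message above):

```python
# test_about compares the repo's expected version string against the one
# reported by the installed package; a dev checkout ahead of the installed
# release makes this fail without affecting any computation.
installed = "0.15.0.dev0"   # version reported by the installed PennyLane
expected = "0.16.0.dev"     # version the repo's test suite expects
print(expected in installed)  # False, so the assertion fails
```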

glassnotes · Apr 21 '21 15:04

Hey @glassnotes, is this still showing the incorrect result?

josh146 · Oct 14 '21 09:10

Just ran locally, yes it does. The results in this graph are consistent with the previous version I obtained running locally, which is still inconsistent with the live demo.

[image: updated local run, still inconsistent with the live demo]

glassnotes · Oct 14 '21 12:10

😬 😬 😬

josh146 · Oct 14 '21 12:10

It's super weird... I again used a fresh environment, installing directly from the repo requirements file. Maybe it's because I used Python 3.8? What version does the CI use? Do you observe this locally too?

glassnotes · Oct 14 '21 12:10

Looks like the CI is using Python 3.7.2.

If the demo doesn't set a seed, it could be due to the CI running all the demos, resulting in a 'better' seed when this demo is run.
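Sketched concretely (purely illustrative, nothing PennyLane-specific): if demos run sequentially in one process and share the global RNG, the draws a given demo sees depend on what ran before it.

```python
import numpy as np

def fake_demo(n):
    # an unseeded "demo" just consumes whatever global RNG state it inherits
    return np.random.rand(n)

np.random.seed(123)                 # CI process starts from some fixed state
first_run = fake_demo(2)            # demo executed first in the process

np.random.seed(123)                 # same starting state...
_ = fake_demo(5)                    # ...but another demo runs before it,
later_run = fake_demo(2)            # so this demo now sees different draws
print(np.allclose(first_run, later_run))  # False
```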

josh146 · Oct 14 '21 12:10

This demo actually does set seeds though; that's the weird thing. For both numpy and tensorflow 😕
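For reference, pinning both RNGs usually looks something like the following. This is a minimal sketch: the seed value is an assumption, not the demo's actual one.

```python
import numpy as np

SEED = 0  # assumed value; the actual demo hard-codes its own seeds

np.random.seed(SEED)          # pins numpy's global generator
a = np.random.rand(3)

np.random.seed(SEED)          # reseeding restores the generator state,
b = np.random.rand(3)         # so identical draws come back
print(np.allclose(a, b))      # True
# (the TensorFlow analogue is tf.random.set_seed(SEED))
```

If both seeds are set at the top of the script, run-to-run variation should disappear, which is why the local/CI mismatch here is surprising.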

glassnotes · Oct 14 '21 13:10

It seems that this problem has been solved. I have run the notebook in Google Colab, and this is my output:

[screenshot: Colab output, Nov 24 '22]

The demo seems to show the same:

[screenshot: rendered demo output]

KetpuntoG · Nov 24 '22 20:11

@KetpuntoG nice! Although I suppose the mystery is not solved, as your results show classical training beating the quantum training, which isn't what the demo originally showed.

josh146 · Nov 24 '22 22:11

It seems very dependent on the seed. If we want something where quantum "wins", we can set seed 42, although I'm not a big fan of these things.

[screenshot: results with seed 42]
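One quick way to quantify "very dependent on the seed" is to sweep seeds and count how often one side comes out ahead. A toy sketch, nothing quantum here; all names and numbers are invented:

```python
import numpy as np

def final_accuracy(seed, bias):
    # stand-in for one full training run: a noisy outcome with a small
    # systematic edge controlled by `bias` (all values are made up)
    rng = np.random.default_rng(seed)
    return bias + 0.1 * rng.standard_normal()

# count how many of 100 seed pairs let the slightly-better model "win"
wins = sum(
    final_accuracy(seed, bias=0.82) > final_accuracy(seed + 1000, bias=0.80)
    for seed in range(100)
)
print(f"{wins}/100 seeds favour the model with the higher mean accuracy")
```

When the noise from the seed is comparable to the real gap between the two models, any single seed (42 included) can make either side look better, which is the concern with cherry-picking one.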

KetpuntoG · Nov 24 '22 22:11

Yeah, I would say this is maybe more an issue with the conclusion of the demo, rather than a seed issue.

josh146 · Nov 24 '22 22:11