Fixing one of the `captum` tests.
This PR fixes the following 9 failing tests:
FAILED test/explain/algorithm/test_captum_explainer.py::test_captum_explainer_multiclass_classification[index1-ModelTaskLevel.node-MaskType.object-MaskType.attributes-ShapleyValueSampling] - RuntimeError: shape '[-1, 2, 1, 1]' is invalid for input of size 3
FAILED test/explain/algorithm/test_captum_explainer.py::test_captum_explainer_multiclass_classification[index1-ModelTaskLevel.node-MaskType.object-None-ShapleyValueSampling] - RuntimeError: shape '[-1, 2, 1]' is invalid for input of size 3
FAILED test/explain/algorithm/test_captum_explainer.py::test_captum_explainer_multiclass_classification[index1-ModelTaskLevel.node-None-MaskType.attributes-ShapleyValueSampling] - RuntimeError: shape '[-1, 2, 1, 1]' is invalid for input of size 3
FAILED test/explain/algorithm/test_captum_explainer.py::test_captum_explainer_multiclass_classification[index1-ModelTaskLevel.edge-MaskType.object-MaskType.attributes-ShapleyValueSampling] - RuntimeError: shape '[-1, 2, 1, 1]' is invalid for input of size 3
FAILED test/explain/algorithm/test_captum_explainer.py::test_captum_explainer_multiclass_classification[index1-ModelTaskLevel.edge-MaskType.object-None-ShapleyValueSampling] - RuntimeError: shape '[-1, 2, 1]' is invalid for input of size 3
FAILED test/explain/algorithm/test_captum_explainer.py::test_captum_explainer_multiclass_classification[index1-ModelTaskLevel.edge-None-MaskType.attributes-ShapleyValueSampling] - RuntimeError: shape '[-1, 2, 1, 1]' is invalid for input of size 3
FAILED test/explain/algorithm/test_captum_explainer.py::test_captum_explainer_multiclass_classification[index1-ModelTaskLevel.graph-MaskType.object-MaskType.attributes-ShapleyValueSampling] - RuntimeError: shape '[-1, 2, 1, 1]' is invalid for input of size 3
FAILED test/explain/algorithm/test_captum_explainer.py::test_captum_explainer_multiclass_classification[index1-ModelTaskLevel.graph-MaskType.object-None-ShapleyValueSampling] - RuntimeError: shape '[-1, 2, 1]' is invalid for input of size 3
FAILED test/explain/algorithm/test_captum_explainer.py::test_captum_explainer_multiclass_classification[index1-ModelTaskLevel.graph-None-MaskType.attributes-ShapleyValueSampling] - RuntimeError: shape '[-1, 2, 1, 1]' is invalid for input of size 3
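All nine failures are the same kind of error: a tensor with 3 elements cannot be viewed as `[-1, 2, 1, 1]` (or `[-1, 2, 1]`), because the product of the known dimensions does not divide the element count. A minimal sketch of that divisibility rule (the helper name `can_view` is hypothetical, not PyTorch API):

```python
def can_view(numel: int, shape: tuple) -> bool:
    """Check whether `numel` elements can be viewed as `shape`,
    where `shape` contains at most one inferred -1 dimension.

    The view is valid only if the product of the known (non -1)
    dimensions divides the total number of elements.
    """
    known = 1
    for d in shape:
        if d != -1:
            known *= d
    return numel % known == 0

# The failing tests try to view 3 elements with a known-dim product of 2:
print(can_view(3, (-1, 2, 1, 1)))  # False -> RuntimeError in torch
print(can_view(3, (-1, 2, 1)))     # False -> RuntimeError in torch
print(can_view(4, (-1, 2, 1, 1)))  # True  -> view succeeds
```

This is why the message reads `shape '[-1, 2, 1, 1]' is invalid for input of size 3`: PyTorch cannot infer the `-1` dimension when 3 is not divisible by 2.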
NOTE: This is a resubmission of PR#9513, which had some issues with my captum branch.
By my comment in https://github.com/pyg-team/pytorch_geometric/pull/9513, I meant: why is our CI still green even without this fix? I am a bit hesitant to merge this without understanding that first.
Our CI is green only because the 187 captum unit tests require the `captum` package to be installed, and it is not included in our PyG container, so they are skipped. The problem shows up once we `pip install captum` and run the unit tests.
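The skip-when-missing behavior described above can be sketched as follows (the `has_package` helper is a simplified stand-in for PyG's optional-dependency test decorators, not their actual implementation):

```python
import importlib.util


def has_package(name: str) -> bool:
    # If the package cannot be found, tests depending on it are skipped,
    # which is why CI stays green on containers without captum installed.
    return importlib.util.find_spec(name) is not None


# In a pytest suite this would typically gate the test, e.g.:
#   @pytest.mark.skipif(not has_package('captum'), reason='captum not installed')
print(has_package("json"))  # True: part of the standard library
print(has_package("definitely_not_a_real_package_xyz"))  # False: would skip
```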
Codecov Report
All modified and coverable lines are covered by tests :white_check_mark:
Project coverage is 87.38%. Comparing base (3f4f1a0) to head (69edf78). Report is 3 commits behind head on master.
```diff
@@            Coverage Diff             @@
##           master    #9549      +/-   ##
==========================================
+ Coverage   86.96%   87.38%   +0.41%
==========================================
  Files         468      482      +14
  Lines       30918    31396     +478
==========================================
+ Hits        26887    27434     +547
+ Misses       4031     3962      -69
```
@akihironitta @rusty1s anything else needed to merge?
I still have the same question as https://github.com/pyg-team/pytorch_geometric/pull/9549#issuecomment-2257520464. Why do we need to skip the test in such cases? I might be misunderstanding, but I do see the test cases pass in master:
https://github.com/pyg-team/pytorch_geometric/actions/runs/10916480630/job/30297994357
@akihironitta: Thank you for the link showing that the test_captum_explainer.py::test_captum_explainer_multiclass_classification test passes on the master branch with all parameter combinations. From the log file, I noticed that your CI is using captum version 0.6:
```
Collecting captum<0.7.0 (from torch-geometric==2.7.0)
  Using cached captum-0.6.0-py3-none-any.whl.metadata (25 kB)
```
In my test, captum version 0.7 (which, I believe, is the latest one) was installed by the `pip install captum` command.
Since this test runs correctly in our container when `pip install captum==0.6` is used, I believe you can close this PR without merging.
Thanks a lot for confirming this! Since we already set the upper bound at:
https://github.com/pyg-team/pytorch_geometric/blob/642f831db4d785b105683973d857385015fee866/pyproject.toml#L75
I don't think this change is necessary (although it'd be nice to add support for captum>=0.7.0, but it's a separate issue).
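The resolution hinges on the `captum<0.7.0` upper bound in `pyproject.toml`: installations that honor the pin get 0.6.x and never hit the failure, while a bare `pip install captum` pulls 0.7 and does. A minimal sketch of that bound check (the helper names are hypothetical; real tooling would use `packaging.version` instead of this simplified parser):

```python
def parse_version(v: str) -> tuple:
    # Simplified parser for plain 'X.Y.Z' release strings only;
    # it does not handle pre-releases or local version segments.
    return tuple(int(part) for part in v.split(".")[:3])


def satisfies_upper_bound(installed: str, bound: str) -> bool:
    # Models an exclusive '<' constraint such as captum<0.7.0.
    return parse_version(installed) < parse_version(bound)


print(satisfies_upper_bound("0.6.0", "0.7.0"))  # True: pinned CI install, tests pass
print(satisfies_upper_bound("0.7.0", "0.7.0"))  # False: bare `pip install captum`
```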