[BUG] Running g.build_gnn(...) raises an AttributeError on from_scipy
Describe the bug: While running g.build_gnn(...), it fails with AttributeError: 'NoneType' object has no attribute 'from_scipy'.
To Reproduce: Code, including data, that can be run without editing:
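No standalone snippet was included (the notebooks mentioned further down reproduce it); a minimal sketch of the assumed shape, with placeholder data and with build_gnn's exact arguments possibly differing:

```python
import pandas as pd
import graphistry

# Placeholder edge list; the thread reproduces the bug via the
# CyberSecurity-Slim.ipynb / Ask-HackerNews-Demo.ipynb notebooks instead.
edges = pd.DataFrame({'src': ['a', 'b', 'c'], 'dst': ['b', 'c', 'a']})
g = graphistry.edges(edges, 'src', 'dst')

g2 = g.build_gnn()  # AttributeError: 'NoneType' object has no attribute 'from_scipy'
```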
Expected behavior: g.build_gnn(...) completes without error.
Actual behavior: It raises AttributeError: 'NoneType' object has no attribute 'from_scipy'.
Graphistry GPU server environment
- Where run [e.g., Graphistry Hub, Colab]
PyGraphistry API client environment
- Where run [e.g., Graphistry 2.35.9 Jupyter]
- Version [e.g. 0.14.0, print via graphistry.__version__]
- Python Version [e.g. Python 3.7.7]
Additional context: graphistry version (dev/dev-skrub branch): 0.35.4+66.g9a3a8864
Packages Installed with Graphistry:
dgl-2.1.0 faiss-cpu-1.9.0.post1 graphistry-0.35.4+66.g9a3a8864 palettable-3.3.3 pynndescent-0.5.13 skrub-0.4.1 squarify-0.4.4 torchdata-0.10.1
The error can be reproduced with the CyberSecurity-Slim.ipynb and Ask-HackerNews-Demo.ipynb notebooks.
Do you have dgl installed?
Yes, dgl-2.1.0 appears in the list of installed packages and in the pip output:
Successfully installed dgl-2.1.0 faiss-cpu-1.9.0.post1 graphistry-0.35.4+66.g9a3a8864 palettable-3.3.3 pynndescent-0.5.13 skrub-0.4.1 squarify-0.4.4 torchdata-0.10.1 umap-learn-0.5.7
Can you try to import it directly? Somehow our lazy import is failing, returning None...
Yeah, it seems the lazy import is done conditionally; when dgl is imported directly, it works. Thank you!
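For reference, the failure mode looks like this (a simplified sketch, not PyGraphistry's actual import code): a conditional lazy import that swallows the ImportError hands None back to the caller, so the error only surfaces later at attribute access, while a plain import dgl raises the real ImportError at the point of failure.

```python
import scipy.sparse as sp

def lazy_dgl():
    # A guarded conditional import: if this path fails, the caller gets
    # None instead of an ImportError.
    try:
        import dgl
        return dgl
    except ImportError:
        return None

dgl = lazy_dgl()
adj = sp.eye(3).tocsr()      # placeholder adjacency matrix
graph = dgl.from_scipy(adj)  # if dgl is None, this raises:
                             # AttributeError: 'NoneType' object has no attribute 'from_scipy'
```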
Now it fails with torchdata. It seems it's a chain of dependencies whose APIs change from version to version :). I will try to fix it. Thank you again!
After downgrading dgl and using the dev-skrub branch of pygraphistry, the problem seems to be solved. It also resolves the "lazy" import issue with dgl. Thank you.
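A quick sanity check of the downgraded environment (a sketch; the expected versions are the ones reported in this thread):

```python
import scipy.sparse as sp
import dgl
import graphistry

print(dgl.__version__)         # expect 1.1.3 after the downgrade
print(graphistry.__version__)  # expect 0.35.4+... on the dev-skrub branch
# Smoke-test the call that originally failed:
print(dgl.from_scipy(sp.eye(2).tocsr()))
```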
- Are you using a GPU, and if so, what CUDA compute version (11.8, 12.4, ...)?
- What versions of pytorch, pandas, numpy, and scipy? (a version-printing snippet follows below)
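For anyone gathering those, a straightforward sketch that prints them:

```python
import pandas
import numpy
import scipy
import torch

# Print the versions the questions above ask about:
for mod in (pandas, numpy, scipy, torch):
    print(mod.__name__, mod.__version__)
```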
Internally, we're working through some updated GPU version alignment. One of our experimental builds (not confirmed working in practice, just on paper according to the libs) is:
- cuda 11.8 (w/ expectation of cuda 12.4 also working w/ below, esp if via 11.8 docker containers)
- torch 2.4.1
- dgl 2.4.0.th24.cu118
- spacy 3.8
- sentence-transformers 3.3
I'd expect CPU runs of those to work as well; a quick alignment check is sketched below.
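One way to check whether a local install matches that alignment (the targets are the experimental versions above, not confirmed requirements):

```python
import torch

print(torch.__version__)          # target: 2.4.1 in the experimental build
print(torch.version.cuda)         # target: '11.8' (None on CPU-only builds)
print(torch.cuda.is_available())  # False is fine for CPU runs
```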
Again, we haven't confirmed our full test suite here yet -- this is slated to land in our early/mid-February enterprise release. While umap and featurize are largely working, we're still going through the GNN parts and some more edge cases on GPU umap/featurize.
For now, we run it on CPU: pandas 2.2.2, torch 2.5.1+cu121, dgl 1.1.3 (downgraded), spacy 3.7.5.
Thank you.