Update support for RAPIDS
From @charlesvardeman:
So @ceteri, I think that you are correct on the RAPIDS release selector. We have RAPIDS installed on a development node of our GPU cluster using the following install selector:
```bash
conda create -n rapids-21.12 -c rapidsai -c nvidia -c conda-forge \
    cudf=21.12 cuml=21.12 cugraph=21.12 python=3.8 cudatoolkit=11.2
```
Running the example from the tutorial, then calling `kg.describe_ns()`:
```python
import kglab

namespaces = {
    "wtm": "http://purl.org/heals/food/",
    "ind": "http://purl.org/heals/ingredient/",
    "skos": "http://www.w3.org/2004/02/skos/core#",
}

kg = kglab.KnowledgeGraph(
    name = "A recipe KG example based on Food.com",
    base_uri = "https://www.food.com/recipe/",
    namespaces = namespaces,
)
```
produces a similar error message to what @fils was seeing.
```
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
/tmp/ipykernel_2396895/1517367763.py in <module>
----> 1 kg.describe_ns()

/opt/anaconda3/envs/rapids-21.12/lib/python3.8/site-packages/kglab/kglab.py in describe_ns(self)
    254
    255         if self.use_gpus:
--> 256             df = cudf.DataFrame(rows_list, columns=col_names)
    257         else:
    258             df = pd.DataFrame(rows_list, columns=col_names)

/opt/anaconda3/envs/rapids-21.12/lib/python3.8/contextlib.py in inner(*args, **kwds)
     73         def inner(*args, **kwds):
     74             with self._recreate_cm():
---> 75                 return func(*args, **kwds)
     76         return inner
     77

/opt/anaconda3/envs/rapids-21.12/lib/python3.8/site-packages/cudf/core/dataframe.py in __init__(self, data, index, columns, dtype)
    610                 )
    611             else:
--> 612                 self._init_from_list_like(
    613                     data, index=index, columns=columns
    614                 )

/opt/anaconda3/envs/rapids-21.12/lib/python3.8/site-packages/cudf/core/dataframe.py in _init_from_list_like(self, data, index, columns)
    750         if columns is not None:
    751             if len(columns) != len(data):
--> 752                 raise ValueError(
    753                     f"Shape of passed values is ({len(index)}, {len(data)}), "
    754                     f"indices imply ({len(index)}, {len(columns)})."

ValueError: Shape of passed values is (31, 31), indices imply (31, 2).
```
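The traceback suggests an interpretation mismatch rather than bad input: pandas treats a list of tuples as rows, while cudf's `_init_from_list_like` compares `len(columns)` against `len(data)` (the number of tuples), effectively treating each tuple as a column. A minimal pandas-only sketch of the shape involved (the `rows_list` contents here are illustrative stand-ins, not the actual output of `describe_ns()`):

```python
import pandas as pd

# Hypothetical stand-in for the rows that describe_ns() builds:
# a list of (prefix, namespace) tuples, one per namespace.
rows_list = [
    ("wtm", "http://purl.org/heals/food/"),
    ("ind", "http://purl.org/heals/ingredient/"),
    ("skos", "http://www.w3.org/2004/02/skos/core#"),
]
col_names = ["prefix", "namespace"]

# pandas interprets each tuple as a row, so this succeeds:
df = pd.DataFrame(rows_list, columns=col_names)
print(df.shape)  # (3, 2)

# Per the traceback, cudf's _init_from_list_like instead compares
# len(columns) against len(data), hence "Shape of passed values is
# (31, 31), indices imply (31, 2)" for the 31 namespaces in the
# tutorial KG. One possible workaround (untested here) would be to
# round-trip through pandas:
#   df = cudf.DataFrame.from_pandas(pd.DataFrame(rows_list, columns=col_names))
```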
The machine details are:
```
Wed Feb 16 14:53:35 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 470.94       Driver Version: 470.94       CUDA Version: 11.4     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Quadro RTX 6000     Off  | 00000000:00:09.0 Off |                    0 |
| N/A   18C    P8    13W / 250W |      3MiB / 22698MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   1  Quadro RTX 6000     Off  | 00000000:00:0A.0 Off |                    0 |
| N/A   21C    P8    13W / 250W |      3MiB / 22698MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   2  Quadro RTX 6000     Off  | 00000000:00:0B.0 Off |                    0 |
| N/A   21C    P8    13W / 250W |      3MiB / 22698MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   3  Quadro RTX 6000     Off  | 00000000:00:0C.0 Off |                    0 |
| N/A   20C    P8    12W / 250W |      3MiB / 22698MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
```
The node is running Red Hat Enterprise Linux release 8.5 (Ootpa), with Python 3.8.12 (default, Oct 12 2021, 13:49:34) [GCC 7.5.0].
I'm working with colleagues at NVIDIA and other partners on better integration, testing, and support of GPU acceleration in kglab. This issue is where we can collect what's needed to set priorities for that work.
If we can help by testing or debugging this issue on our cluster at ND, just ping me.