Matthias Fey
Are you referring to a matrix multiplication (since your `weight` seems to be 2-dimensional)? In that case, you may want to take a look at https://pyg-lib.readthedocs.io/en/latest/modules/ops.html.
Sorry for the late reply, but why does `scatter_add(weight * src, index, dim)` not work for you?
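For reference, a minimal sketch of what this could look like with `torch_scatter` (the tensor shapes and the per-edge weighting below are illustrative assumptions, not taken from the original issue):

```python
import torch
from torch_scatter import scatter_add

src = torch.randn(6, 16)                  # e.g., per-edge source features
weight = torch.rand(6, 1)                 # per-edge scalar weights, broadcast over the feature dim
index = torch.tensor([0, 0, 1, 1, 2, 2])  # target node of each edge

# Weighted aggregation: multiply first, then scatter-sum into the targets.
out = scatter_add(weight * src, index, dim=0)  # shape: [3, 16]
```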
Ah, I get your point. Yes, one could fuse these operations together. This is a great optimization.
Can you take a look at `torch_geometric.utils.to_dense_batch` from PyG? This might be what you are looking for (but expects sorted indices).
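A small usage sketch, assuming node features `x` and a sorted `batch` vector (the shapes below are made up for illustration):

```python
import torch
from torch_geometric.utils import to_dense_batch

x = torch.randn(7, 4)                        # node features of a mini-batch of 3 graphs
batch = torch.tensor([0, 0, 0, 1, 1, 2, 2])  # sorted graph assignment per node

# out: [num_graphs, max_nodes_per_graph, num_features]; mask marks real (non-padded) entries.
out, mask = to_dense_batch(x, batch)         # out: [3, 3, 4], mask: [3, 3]
```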
Can you confirm that your `index` tensor is of type `torch.long` via `index = index.long()`?
Thanks for this issue. I understand that this works for `Cython`, but I am unsure what exactly happens in the PyTorch case. For example, given the `requirement.txt`:
```
torch==1.10.0+cu102
torch-scatter...
```
I understand that. I think most people are installing from wheels anyway, which works just fine in a single `requirement.txt` file or via `conda`. For users of `pip install torch-scatter`...
Currently, no. I think one can implement it via alias sampling.
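For context, a minimal sketch of the alias method (Walker's method) for O(1) sampling from a discrete transition distribution; the helper names below are hypothetical and not part of `torch_cluster`:

```python
import torch

def build_alias_tables(probs: torch.Tensor):
    # Hypothetical helper: pre-compute acceptance and alias tables
    # for O(1) sampling from a (possibly unnormalized) distribution.
    n = probs.numel()
    prob = (probs.float() / probs.sum()) * n
    alias = torch.zeros(n, dtype=torch.long)
    small = [i for i in range(n) if prob[i] < 1.0]
    large = [i for i in range(n) if prob[i] >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        alias[s] = l
        prob[l] = prob[l] - (1.0 - prob[s])
        (small if prob[l] < 1.0 else large).append(l)
    for i in small + large:  # numerical leftovers
        prob[i] = 1.0
    return prob, alias

def alias_sample(prob: torch.Tensor, alias: torch.Tensor, num_samples: int):
    # Draw `num_samples` indices, each in O(1) time.
    idx = torch.randint(prob.numel(), (num_samples,))
    accept = torch.rand(num_samples) < prob[idx]
    return torch.where(accept, idx, alias[idx])
```

With per-node tables like these, each walk step could pick the next neighbor via `alias_sample` instead of a uniform choice.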
`torch.ops.torch_cluster.random_walk` calls the underlying CPU/CUDA implementation in `csrc/cpu/rw_cpu.cpp` and `csrc/cuda/rw_cuda.cu`. These implementations cannot currently handle custom transition probabilities.
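For comparison, this is roughly how the current (uniform) walk is invoked through the Python-level wrapper; I am writing this from memory, so please double-check the exact signature against your installed `torch_cluster` version:

```python
import torch
from torch_cluster import random_walk

# A toy graph given as an edge list (row -> col), sorted by row.
row = torch.tensor([0, 1, 1, 2, 2, 3])
col = torch.tensor([1, 0, 2, 1, 3, 2])
start = torch.tensor([0, 1, 2])

# Uniform transition probabilities; output shape: [num_starts, walk_length + 1].
walks = random_walk(row, col, start, walk_length=4)
```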
I'll see what I can do :)