yfismine
## 🐛 Bug

## To Reproduce

According to my understanding, this may be a potential bug in distributed DGL. There is such a code for neighbor sampling in the...
## 🐛 Bug

## To Reproduce

Steps to reproduce the behavior:

```python
import dgl
from openhgnn.dataset.gtn_dataset import ACM4GTNDataset

graph = dgl.to_block(ACM4GTNDataset()[0])
print(graph)
print(graph.dstdata["label"])
graph.dstdata["label_copy"] = graph.dstdata["label"]
print(graph.dstdata["label_copy"])
print(graph.srcdata["label_copy"])
graph.dstnodes["paper"].data["label_copy"] =...
```
## 🐛 Bug

## To Reproduce

Steps to reproduce the behavior:

```shell
conda install pytorch=2.3.0 cpuonly torchmetrics=1.4.0 -c pytorch -c conda-forge -y
conda install dgl=2.2.1 -c dglteam/label/th23_cpu -y
python -c...
```
## 🚀 Feature

`*mod_args.get(etype, ())` and `**mod_kwargs.get(etype, {})` in `HeteroGraphConv`'s `forward` need to be changed to `*mod_args.get((stype, etype, dtype), ())` and `**mod_kwargs.get((stype, etype, dtype), {})`.

## Motivation

For heterogeneous graphs, the parameters of a...
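The motivation can be illustrated with a minimal sketch (plain Python, no DGL dependency; the node and relation names below are made up for illustration): when per-relation arguments are looked up by the edge-type name alone, two relations that share that name but differ in source/destination node types cannot receive different arguments, whereas keying by the full canonical `(stype, etype, dtype)` triple keeps them distinct.

```python
# Hypothetical graph schema: two canonical edge types share the etype name
# "links", so a lookup keyed by etype alone cannot tell them apart.
canonical_etypes = [
    ("user", "links", "item"),
    ("item", "links", "user"),
]

# Per-relation kwargs keyed only by etype: one entry covers both relations.
kwargs_by_etype = {"links": {"edge_weight": "w_user_item"}}

# Per-relation kwargs keyed by the canonical (stype, etype, dtype) triple.
kwargs_by_canonical = {
    ("user", "links", "item"): {"edge_weight": "w_user_item"},
    ("item", "links", "user"): {"edge_weight": "w_item_user"},
}

def dispatch(key_fn, kwargs_table):
    """Mimic HeteroGraphConv.forward's per-relation kwargs lookup."""
    return {
        (stype, etype, dtype): kwargs_table.get(key_fn(stype, etype, dtype), {})
        for stype, etype, dtype in canonical_etypes
    }

# etype-only keying: both relations receive the *same* kwargs.
by_etype = dispatch(lambda s, e, d: e, kwargs_by_etype)
# canonical keying: each relation receives its own kwargs.
by_canonical = dispatch(lambda s, e, d: (s, e, d), kwargs_by_canonical)

print(by_etype[("user", "links", "item")] == by_etype[("item", "links", "user")])
print(by_canonical[("user", "links", "item")] == by_canonical[("item", "links", "user")])
```

With etype-only keying the first comparison prints `True` (the two relations are indistinguishable), while canonical keying prints `False`, which is the behavior the proposed change would enable.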
## 🐛 Bug

When training a GraphSAGE model using mini-batches on a CPU with a DGL graph of `idtype=i32`, the execution is interrupted abnormally during `loss.backward()`. However, switching to `idtype=i64`...