2 comments of gloriatao
I have the same problem; the current setting only allows batch size = 1.
Got it! Use torch.matmul instead of torch.mm in the GraphAttentionLayer class. torch.mm only accepts 2-D tensors, while torch.matmul broadcasts over a leading batch dimension, so batched inputs work.
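A minimal sketch of the difference (not the repository's actual layer; tensor sizes are hypothetical): torch.mm rejects a batched input of shape (B, N, F), while torch.matmul handles it by broadcasting the shared weight matrix over the batch dimension.

```python
import torch

B, N, F_in, F_out = 4, 5, 8, 3        # hypothetical batch/node/feature sizes
h = torch.randn(B, N, F_in)           # batched node features
W = torch.randn(F_in, F_out)          # shared weight matrix

# torch.matmul broadcasts over the batch dimension: (B, N, F_in) @ (F_in, F_out)
Wh = torch.matmul(h, W)
print(Wh.shape)                       # torch.Size([4, 5, 3])

# torch.mm requires strictly 2-D inputs, so the batched tensor fails
try:
    torch.mm(h, W)
except RuntimeError:
    print("torch.mm rejects batched (3-D) input")
```

So replacing torch.mm with torch.matmul inside the layer's forward pass is enough to lift the batch-size-1 restriction, as long as the attention computations are likewise written with broadcasting-aware ops.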