
[Fix] Improve dim_size handling in SetTransformerAggregation to prevent CUDA crash

Open · KAVYANSHTYAGI opened this issue 8 months ago · 0 comments

This PR improves the robustness of SetTransformerAggregation by:

  • Automatically setting dim_size = index.max() + 1 when dim_size is not provided.
  • Raising a clear error when index.max() >= dim_size, so that out-of-range indices fail early on the host instead of crashing inside a CUDA kernel during evaluation.

This is especially helpful for datasets like PPI, where data.batch may be missing. It replaces hard-to-debug GPU errors with clear, early validation.

KAVYANSHTYAGI · Apr 23 '25, 10:04