GNN_for_EHR
Possible small bug in code?
Hi,
Thank you for making the code public. It's really nice to see!
I think the following line here https://github.com/NYUMedML/GNN_for_EHR/blob/65cd2102982d048ff6dab4f1ea40f0895ed3f091/model.py#L59
should be
if concat:
self.norm = LayerNorm(hidden_features * num_heads)
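
For context, here is a minimal sketch of how the fix might look, assuming a GAT-style layer like the one in the linked `model.py` (the class body is abbreviated and simplified here, not copied from the repository):

```python
import torch.nn as nn

class GraphAttentionLayer(nn.Module):
    def __init__(self, in_features, hidden_features, num_heads, concat=True):
        super().__init__()
        self.concat = concat
        if concat:
            # Concatenating num_heads heads of size hidden_features gives an
            # output of size hidden_features * num_heads, so the LayerNorm
            # must be sized to match.
            self.norm = nn.LayerNorm(hidden_features * num_heads)
        else:
            # Averaging over heads keeps the output at hidden_features.
            self.norm = nn.LayerNorm(hidden_features)
```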
Also, apologies if I have missed this, but if the output of the multi-head attention is concatenated, shouldn't that be reflected in the size of the shared weight matrix W at successive layers? Currently it is constant at d_in x d, but it should be dK x d at intermediate layers.
https://github.com/NYUMedML/GNN_for_EHR/blob/65cd2102982d048ff6dab4f1ea40f0895ed3f091/model.py#L46
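
To make the dimension point concrete, here is a toy sketch (not the repository's code; the attention mechanism itself is elided and stood in for by a plain concatenation) showing why an intermediate layer's weight must be dK x d once the previous layer concatenates K heads:

```python
import torch
import torch.nn as nn

d_in, d, K = 128, 64, 4  # illustrative sizes, not the repo's defaults

W1 = nn.Parameter(torch.empty(d_in, d))   # first layer: d_in x d
W2 = nn.Parameter(torch.empty(d * K, d))  # intermediate layer: dK x d
nn.init.xavier_uniform_(W1)
nn.init.xavier_uniform_(W2)

x = torch.randn(10, d_in)  # 10 nodes with d_in input features

# Stand-in for K attention heads whose outputs are concatenated:
h1 = torch.cat([x @ W1 for _ in range(K)], dim=-1)  # shape (10, d * K)

# The next layer only type-checks because W2 has input dimension d * K;
# a constant d_in x d weight would fail here whenever d_in != d * K.
h2 = h1 @ W2  # shape (10, d)
```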