GeometricFlux.jl
Implementation of EGNN
@emsal0 is working on this issue.
My current design is in my branch: https://github.com/FluxML/GeometricFlux.jl/compare/master...emsal0:EGNN?expand=1. Tests are still failing, so I haven't made a PR yet, and the implementation is not yet complete.
The input data to an EGNN layer is a graph where each node contains:
- node features (called `h` in the paper), of dimension `in_dim`
- a positional encoding (called `x` in the paper), which consists of the rest of the vector, of dimension `x_dim`

Thus the input should come in the form of a vector of size `in_dim + x_dim`.
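As a quick sketch of that layout (the dimensions below are arbitrary examples, not values from the branch):

```julia
# A node's input state is its features h concatenated with its
# positional encoding x (example dimensions are arbitrary).
in_dim, x_dim = 4, 3
h = rand(Float32, in_dim)   # node features
x = rand(Float32, x_dim)    # positional encoding
node_state = vcat(h, x)     # full input vector of size in_dim + x_dim
length(node_state)          # 7
```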
In the paper an EGNN layer induces three neural networks:
- `phi_e`, going from `in_dim` to an arbitrary intermediate dimension `int_dim` (standing for intermediate dimension)
- `phi_x`, going from `int_dim` to dimension 1
- `phi_h`, going from dimension (`in_dim` + `int_dim`) to `out_dim`
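To make those shapes concrete, here is a minimal sketch with plain matrices standing in for the three one-layer networks (dimensions are arbitrary examples; this is not the branch implementation):

```julia
in_dim, int_dim, out_dim = 4, 8, 5

W_e = randn(Float32, int_dim, in_dim)            # phi_e: in_dim -> int_dim
W_x = randn(Float32, 1, int_dim)                 # phi_x: int_dim -> 1
W_h = randn(Float32, out_dim, in_dim + int_dim)  # phi_h: (in_dim + int_dim) -> out_dim

h = rand(Float32, in_dim)
m = W_e * h               # intermediate "message", length int_dim
s = W_x * m               # scalar used to update the position x, length 1
h_out = W_h * vcat(h, m)  # updated node features, length out_dim
```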
The layer should be constructed with either:
- Arguments `(in_dim, int_dim, out_dim)`, a tuple of ints specifying the feature input dimension `in_dim` (the dimension of `h`), `int_dim` (arbitrary, the output dimension of the `phi_e` function in the paper), and `out_dim`. This would by default construct one-layer MLPs for `phi_e`, `phi_x`, and `phi_h`.
- Arguments `(nn_e, nn_x, nn_h)`, each a function (these should be MLPs) with the appropriate dimensions as specified by `phi_e`, `phi_x`, and `phi_h` above. It might be good to validate these inputs so that they are necessarily `Flux.Chain` objects populated only by `Flux.Dense` layers at each layer, but I'm unsure whether that's the right decision to make. (Thus this path isn't implemented yet.)