Gabrijel Boduljak
> @gboduljak Thank you, this is a lot of great info! Will try to catch myself up and help :-) Thank you :) Here are some concrete things where I...
> @gboduljak No problem, but after looking it over, I'm not sure I can be extraordinarily helpful beyond some simpler tasks. This is a lot lower-level coding than I'm used...
> Yeah, that one's on our list of examples to add! Are you interested in contributing it? If so, which model would you use? I would like to contribute :)...
@nkasmanoff Thanks for the help. I will take a look at your work now.
@nkasmanoff I merged your PR, corrected the nits, and refactored your implementation so that everything is in the ``preprocessing`` folder. Many thanks for the help. In the future, we might drop...
I would like to add that an efficient MPNN layer implementation also depends on ``scatter_{op}`` and sparse linear algebra support. In addition, popular graph attention layers such as GAT and...
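To make the role of ``scatter_{op}`` concrete, here is a minimal sketch of the aggregation step at the heart of an MPNN layer. This is written in NumPy rather than mlx (the mlx scatter API under discussion is not shown in the thread), and the names ``scatter_add_aggregate``, ``node_feats``, and ``edge_index`` are illustrative, not from any library:

```python
import numpy as np

def scatter_add_aggregate(node_feats, edge_index):
    """Sum messages from source nodes into their destination nodes.

    node_feats: (num_nodes, dim) array of node features.
    edge_index: (2, num_edges) array; row 0 = source ids, row 1 = destination ids.
    """
    src, dst = edge_index
    messages = node_feats[src]        # gather: one message per edge
    out = np.zeros_like(node_feats)
    np.add.at(out, dst, messages)     # scatter_add: accumulate per destination
    return out

# Tiny graph with edges 0->1, 2->1, 1->2
feats = np.array([[1.0], [10.0], [100.0]])
edge_index = np.array([[0, 2, 1],
                       [1, 1, 2]])
agg = scatter_add_aggregate(feats, edge_index)
# node 1 receives feats[0] + feats[2] = 101; node 2 receives feats[1] = 10
```

Replacing ``np.add.at`` with a max or mean accumulator gives the other common ``scatter_{op}`` variants, which is why first-class support for these primitives matters for performance.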
I would like to contribute to the implementation of MPNN layers, graph normalization layers, and graph utilities such as graph (mini-)batching.
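For readers unfamiliar with graph mini-batching: the standard trick is to batch graphs as one disjoint union, i.e. concatenate node features, offset each graph's edge indices, and keep a ``batch`` vector mapping nodes back to graphs. The sketch below follows that common convention (as popularized by libraries like PyTorch Geometric); the function and variable names are illustrative, not a fixed mlx API:

```python
import numpy as np

def batch_graphs(graphs):
    """Batch a list of (node_feats, edge_index) pairs into one disjoint union.

    Returns concatenated features, offset edge indices, and a batch vector
    recording which graph each node came from.
    """
    feats, edges, batch = [], [], []
    offset = 0
    for gid, (x, e) in enumerate(graphs):
        feats.append(x)
        edges.append(e + offset)            # shift node ids into the global range
        batch.append(np.full(len(x), gid))  # graph id for every node
        offset += len(x)
    return np.concatenate(feats), np.concatenate(edges, axis=1), np.concatenate(batch)

# Two tiny graphs: a 2-node graph with edge 0->1, a 3-node graph with edge 2->0
g1 = (np.ones((2, 4)), np.array([[0], [1]]))
g2 = (np.zeros((3, 4)), np.array([[2], [0]]))
x, e, b = batch_graphs([g1, g2])
# x.shape == (5, 4); e == [[0, 4], [1, 2]]; b == [0, 0, 1, 1, 1]
```

Because the batched graph is just one big graph with disconnected components, a single MPNN forward pass handles the whole batch, and per-graph readouts reduce over the ``batch`` vector.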
> It would probably be much easier for maintenance to have all GNN-related code within the separate `mlx-graphs` lib (MPNN, GraphNet, etc), and the basic operations (scattering, sparse operations) remain...
At this point, I cannot allocate enough time to lead development of ``mlx-graphs``, but I plan to contribute to it. I think it is best to develop core ``mlx-graphs``...
> I'm considering creating a chat group to ease quick communication between us, @francescofarina @gboduljak do you use X? I do not use X :(. Please create the group...