distributed-learning-contributivity
Simulate collaborative ML scenarios, experiment with multi-partner learning approaches, and measure the respective contributions of different datasets to model performance.
https://arxiv.org/pdf/1602.05629.pdf https://arxiv.org/pdf/1805.09767.pdf https://arxiv.org/pdf/1908.07873.pdf https://arxiv.org/pdf/1912.04977.pdf
Increasing the number of minibatches seems to increase the time needed for a federated run. This might be due to the process of: - creating the model - setting the...
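One way to check this hypothesis is to count how often the model gets rebuilt per run. A minimal sketch, where `ModelFactory` is a hypothetical stand-in for the real (potentially expensive) model creation and compilation:

```python
class ModelFactory:
    """Hypothetical stand-in for the (potentially expensive)
    model creation/compilation step."""

    def __init__(self):
        self.builds = 0

    def build(self):
        self.builds += 1
        return {"weights": [0.0]}  # placeholder for real model weights


def run_epoch(factory, n_minibatches, reuse_model=True):
    # Rebuilding the model for every minibatch scales the overhead
    # linearly with n_minibatches; reusing one instance does not.
    model = factory.build() if reuse_model else None
    for _ in range(n_minibatches):
        current = model if reuse_model else factory.build()
        _ = current  # a real training step on one minibatch would go here
    return factory.builds
```

If the rebuild count grows with the number of minibatches, that would explain the observed slowdown.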
- [x] Elaborate the scenario - [x] Identify a public dataset of choice for running this experimental scenario - [ ] Adapt the library to work with this public dataset (in...
Currently each scenario is processed sequentially. Since scenarios are independent, we could parallelize the computations. One thing to be wary of is GPU memory if we train too many...
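A minimal sketch of this idea, with a bounded worker pool to limit how many scenarios run at once (`run_scenario` and `max_workers` are hypothetical names; a real GPU workload would likely need separate processes with per-process memory limits rather than threads):

```python
from concurrent.futures import ThreadPoolExecutor


def run_scenario(scenario_config):
    # Hypothetical stand-in for processing one scenario end-to-end.
    return {"name": scenario_config["name"], "done": True}


def run_scenarios_parallel(configs, max_workers=2):
    # Scenarios are independent, so they can run concurrently.
    # max_workers caps concurrency so that too many models are
    # not trained on the GPU at the same time.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        # map preserves the input order of the results
        return list(pool.map(run_scenario, configs))
```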
The current initialization of s-models is done with a matrix whose shape is hardcoded to 10-by-10. This shape should instead be computed from the model.
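A sketch of deriving the size from the number of output classes instead of hardcoding 10 (the identity initialization and the Keras-style `model.output_shape` access are assumptions, not the library's confirmed behavior):

```python
import numpy as np


def init_s_matrix(num_classes):
    # Size the s-matrix from the model's number of output classes
    # instead of a hardcoded 10-by-10 matrix.
    # (Identity initialization is an assumption here.)
    return np.identity(num_classes)


# With a Keras-style model, num_classes could be read from the
# output layer, e.g.: num_classes = model.output_shape[-1]
```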
With the current implementations of the various mpls, the computational time is not representative of a true federated scenario, for two main reasons: - the partners cannot train...
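One likely source of the discrepancy can be sketched as follows: in a real federated round the partners train in parallel, so the round's wall-clock time is bounded by the slowest partner, whereas a sequential simulation accumulates the sum of all per-partner times (a simplified model, ignoring communication and aggregation costs):

```python
def round_wall_clock(partner_train_times):
    # Sequential simulation: per-partner training times add up.
    # True federated round: bounded by the slowest partner.
    return {
        "sequential_simulation": sum(partner_train_times),
        "true_federated": max(partner_train_times),
    }
```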
Additionally, consider saving the results of scenarios run alone.
Until now, we have only been saving the global accuracy and loss of the models. As the multi-partner learning scenarios become more complex, it would be interesting to evaluate...
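A minimal sketch of saving per-partner metrics alongside the global ones, assuming a hypothetical nested history structure (the field names and layout are illustrative, not the library's actual format):

```python
import csv


def save_partner_history(history, path):
    """Write per-partner metrics to a CSV file.

    history is assumed to look like:
      {round_index: {partner_id: {"accuracy": float, "loss": float}}}
    """
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["round", "partner", "accuracy", "loss"])
        for rnd in sorted(history):
            for pid in sorted(history[rnd]):
                metrics = history[rnd][pid]
                writer.writerow([rnd, pid, metrics["accuracy"], metrics["loss"]])
```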