distributed-learning-contributivity

Simulate collaborative ML scenarios, experiment with multi-partner learning approaches, and measure the respective contributions of different datasets to model performance.

39 distributed-learning-contributivity issues

- https://arxiv.org/pdf/1602.05629.pdf
- https://arxiv.org/pdf/1805.09767.pdf
- https://arxiv.org/pdf/1908.07873.pdf
- https://arxiv.org/pdf/1912.04977.pdf

help wanted
question
nice_to_have

Increasing the number of minibatches seems to increase the time needed for a federated run (see the sketch below). This might be due to the process of: - creating the model - setting the...

question
experiment
need test
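
If the overhead really comes from rebuilding Keras models for every minibatch, one mitigation is to build each partner's model once and only swap weights between minibatches. A minimal sketch under that assumption; the architecture and the three-partner setup are illustrative, not the library's actual code:

```python
import tensorflow as tf

def build_model():
    # Illustrative architecture; the library's actual model may differ.
    return tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])

# Build and compile one model per partner up front (the costly part),
# instead of repeating it for every minibatch.
partner_models = [build_model() for _ in range(3)]
for m in partner_models:
    m.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])

global_weights = partner_models[0].get_weights()
for minibatch_index in range(10):
    for m in partner_models:
        # Swapping weights is much cheaper than cloning and recompiling a model.
        m.set_weights(global_weights)
        # ... local training on this partner's current minibatch goes here ...
    # ... aggregation of the partners' weights into global_weights goes here ...
```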

- [x] Elaborate the scenario
- [x] Identify a public dataset of choice for running this experimental scenario
- [ ] Adapt library for working with this public dataset (in...

experiment
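
Related to the checklist above, a minimal sketch of splitting a public dataset across partners; MNIST and the `split_among_partners` helper are placeholders chosen for illustration, since the issue does not say which dataset was selected:

```python
import numpy as np
from tensorflow.keras.datasets import mnist

def split_among_partners(x, y, partners_count, seed=42):
    """Shuffle the dataset and return one (x, y) shard per partner."""
    indices = np.random.default_rng(seed).permutation(len(x))
    shards = np.array_split(indices, partners_count)
    return [(x[idx], y[idx]) for idx in shards]

(x_train, y_train), _ = mnist.load_data()
partner_datasets = split_among_partners(x_train, y_train, partners_count=3)
for i, (x_p, y_p) in enumerate(partner_datasets):
    print(f"Partner {i}: {len(x_p)} samples")
```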

Currently each scenario is processed sequentially. Since scenarios are independent, we could parallelize the computations (see the sketch below). One thing to be wary of is the GPU memory if we train too many...

nice_to_have
optimization
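
A minimal sketch of how scenarios could be parallelized with a bounded pool of worker processes, which also caps how many models sit on the GPU at once; `run_scenario` is a hypothetical entry point, not the library's API:

```python
from concurrent.futures import ProcessPoolExecutor

def run_scenario(scenario):
    # Hypothetical placeholder for the library's per-scenario processing.
    # Each worker process may also need to limit TensorFlow's GPU memory
    # allocation (e.g. enable memory growth) to avoid exhausting the GPU.
    ...

def run_all_scenarios(scenarios, max_workers=2):
    # max_workers bounds how many scenarios train simultaneously.
    with ProcessPoolExecutor(max_workers=max_workers) as executor:
        return list(executor.map(run_scenario, scenarios))
```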

The current initialization of the S-models is done with a matrix whose shape is hardcoded to 10-by-10. This shape should instead be computed from the model (see the sketch below).

bug
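
A minimal sketch of deriving that shape from the model's output dimension instead of hardcoding 10-by-10; the identity initialization and variable names are illustrative, and the actual S-model code may use a different scheme:

```python
import numpy as np
import tensorflow as tf

def init_s_matrix(model):
    # Infer the number of output classes from the model instead of hardcoding 10.
    num_classes = model.output_shape[-1]
    return np.identity(num_classes, dtype=np.float32)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(10, activation="softmax"),
])
s_matrix = init_s_matrix(model)  # shape (10, 10), derived rather than hardcoded
```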

With the current implementations of the various mpls, the computational time does not reflect that of a true federated scenario, for two main reasons: - the partners cannot train...

enhancement
help wanted
nice_to_have
optimization

And/or also add saving the results of scenarios run alone.

bug
enhancement

Until now, we have been saving only the global accuracy and loss of the models (see the sketch below). As the multi-partner learning scenarios are getting more complex, it would be interesting to evaluate...

enhancement
nice_to_have
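
A minimal sketch of collecting per-partner validation metrics in addition to the global ones; the `partners` objects, their attributes, and the CSV schema are assumptions made for illustration, not the library's actual structures:

```python
import csv

def evaluate_partners(model, partners):
    """Evaluate the global model on each partner's validation data.

    Assumes the model is compiled with a single accuracy metric, so that
    evaluate() returns [loss, accuracy].
    """
    rows = []
    for partner in partners:
        loss, accuracy = model.evaluate(partner.x_val, partner.y_val, verbose=0)
        rows.append({"partner_id": partner.id, "val_loss": loss, "val_accuracy": accuracy})
    return rows

def save_partner_metrics(rows, path="partner_metrics.csv"):
    # Write one row per partner so per-partner trends can be inspected later.
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["partner_id", "val_loss", "val_accuracy"])
        writer.writeheader()
        writer.writerows(rows)
```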