Training stops when a client fails
Describe the bug
When a round encounters failures because the gRPC bridge is closed for one of the clients, the whole training stops. At first, evaluation was not running after fitting, so I disabled evaluation. Now, if the first round has failures, the second round doesn't start!
Steps/Code to Reproduce
I am using the code example here: https://flower.ai/docs/examples/embedded-devices.html. Most of the time my topology works, but when the gRPC bridge closes (I am not sure why), the training stops.
Expected Results
The training should continue, ignoring the failed devices, when accept_failures is True. Alternatively, the server should try to create a new gRPC connection (bridge).
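For context, this is roughly where accept_failures is set on my side. It is only a minimal sketch following the legacy start_server API used by that example; the server address, round count, and client counts below are placeholders, not my exact configuration.

```python
import flwr as fl

# Sketch: accept_failures=True should let FedAvg aggregate whatever results
# it did receive instead of aborting when a client drops.
strategy = fl.server.strategy.FedAvg(
    min_fit_clients=2,        # placeholder values, not the exact setup
    min_available_clients=2,
    accept_failures=True,     # failed/dropped clients should be ignored
)

fl.server.start_server(
    server_address="0.0.0.0:8080",                # placeholder address
    config=fl.server.ServerConfig(num_rounds=3),  # placeholder round count
    strategy=strategy,
)
```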
Actual Results
The training doesn't continue when accept_failures is True.
Hey @oabuhamdan, thanks for opening this issue. That example still needs to be updated to the new way of using Flower (i.e., via flwr run). Most of the other examples have been updated; see, for example, the recently updated https://github.com/adap/flower/tree/main/examples/flower-authentication.
We plan to update the embedded-devices example by the end of the week. Are you interested in starting this effort? Do you have some bandwidth? The steps aren't too complex or too different from other examples that use the Deployment Engine (as the authentication example does). For this we just need to:
- Indicate to users how to launch a SuperLink and a SuperExec on a workstation or laptop.
- Then indicate that flower-supernode needs to be executed on the RPis.
- With the above in place and in an "idling" state, someone can run flwr run . and this will start the Run (rough command sketch below).
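Roughly, those steps map to commands like the following. This is only a sketch: the exact entrypoints, flags, and ports (in particular the SuperExec command line and the Fleet API port) vary between Flower versions, so please double-check the Deployment Engine docs for the version you pin.

```bash
# On the workstation/laptop (flag names are assumptions; verify against the docs):
flower-superlink --insecure        # start the SuperLink without TLS for a first iteration
flower-superexec --insecure        # start the SuperExec (entrypoint/flags assumed)

# On each Raspberry Pi, pointing at the SuperLink's Fleet API (IP and port are placeholders):
flower-supernode --insecure --superlink 192.168.1.10:9092

# From the project directory, once the processes above are idling:
flwr run .
```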
I'd suggest first starting with the bare minimum and then adding more features (like the SSL certificates). Using Docker is not needed. We can first focus on RPi devices (and later verify that things work on Jetsons). Feel free to change the models and datasets used in the example (but let's keep the workload lightweight if possible).
Let me know if you'd like to start this effort!
Hi,
thanks for raising this. The updated code example now resolves this, so I will close this issue.