Can TFF learning be executed on client machine data
I am trying to implement federated learning with TensorFlow Federated (TFF). I am not able to run the TensorFlow model on a dataset that exists on the client machine. The process I followed is described below.
I have one server machine, which hosts the dataset to be used for federated learning. I have created the model and the TFF federated averaging process on the server. The remote executor service is running on a client machine. I tried to connect to the client machine and run the model (a Python program) there with the dataset present on that client machine, but the program looks for the dataset on the server machine and fails. I would like to know whether this process flow is correct, and whether the TFF model can be executed on a dataset hosted on the client machine.
This is definitely possible. The first question I would ask is: are you installing a remote TFF runtime?
By default, TFF installs its local runtime when imported, so without an explicit call to process computations in this remote setting, TFF will attempt to process the whole thing locally. If you can construct and pass in gRPC channels representing connections to your workers, the symbol above should work for you.
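Something along these lines is the rough shape of it (a sketch only: the worker addresses are placeholders, and the exact context-setting symbol has moved around between TFF releases, e.g. set_remote_execution_context vs. set_remote_python_execution_context, so check what your version exposes):

```python
# Rough sketch: point TFF at remote executor services instead of the
# default local runtime. The worker addresses are placeholders.
import grpc
import tensorflow_federated as tff

# Addresses where the remote executor services are listening.
worker_addresses = ['10.0.0.2:8000', '10.0.0.3:8000']

channels = [grpc.insecure_channel(address) for address in worker_addresses]

# Install the remote execution context so subsequent TFF computations are
# dispatched to the workers over gRPC; the symbol name differs by version.
tff.backends.native.set_remote_python_execution_context(channels)
```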
Finally: this question might be better suited for Stack Overflow with the tensorflow-federated tag, since it is a bit more of a usage question than a bug or feature request for TFF code.
Thank you for your response!
- Yes, the TFF runtime is running on the client machine.
- The model broadcast is working fine with the .next call of the iterative process.
- The model training executes, but only with the local data passed in as a parameter.
- The only thing that is not clear is how to train the model on the data hosted on the client machine; I did not find any relevant example or documentation for it. A rough sketch of my driver loop is at the end of this comment.
I have already posted the same question on Stack Overflow but did not get a response :-)
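For reference, this is roughly what my driver loop looks like (just a sketch; model_fn, the client datasets, and the round count are placeholders for what my actual server program builds, and the point is that the datasets are currently read on the server before each call to .next):

```python
import tensorflow as tf
import tensorflow_federated as tff

# Placeholder: in my actual program this wraps my own Keras model.
def model_fn():
    ...  # returns a tff.learning.Model

iterative_process = tff.learning.build_federated_averaging_process(
    model_fn,
    client_optimizer_fn=lambda: tf.keras.optimizers.SGD(learning_rate=0.1))

# Placeholder: a list of tf.data.Dataset objects, currently materialized on
# the server; this is exactly the data I would like to keep on the clients.
client_datasets = [...]

state = iterative_process.initialize()
for round_num in range(10):
    state, metrics = iterative_process.next(state, client_datasets)
    print('round {}: {}'.format(round_num, metrics))
```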
Apologies, I don't monitor the TFF tags on SO as closely as I once did...
I'm not aware of any full-on tutorials we have today. Probably the easiest place to start is with this binary, which essentially assumes you've got a service binary running 'elsewhere'. An example worker binary can be found here, though I'm not 100% sure it's integration tested; it wouldn't shock me if a symbol or two needs updating, since that's happened before.
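Roughly, the worker side boils down to something like this (again only a sketch: the helper names have shifted between releases, and whether run_server takes an executor or an executor factory depends on the version, so treat it as a starting point rather than something guaranteed to run as-is):

```python
# Sketch of a worker-side executor service, loosely following the example
# worker binary; symbol names may need updating for your TFF version.
import tensorflow_federated as tff

PORT = 8000        # the port the driver's gRPC channel will connect to
NUM_THREADS = 10   # number of concurrent computations this worker serves

executor_factory = tff.framework.local_executor_factory()
tff.simulation.run_server(executor_factory, NUM_THREADS, PORT)
```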
I referred to the mentioned binaries, and I am able to broadcast and execute the model remotely. But I need to pass the dataset, or the client_id of the dataset, with the broadcast. Is there a way for the broadcast model to work with the client data locally on the client device, without passing the data along with the broadcast?
@Crazynovatech did you solve this issue?
Bump
I am facing the same issue as well!
Bumping this, facing the same issue. There doesn't seem to be explicit documentation detailing how to run federated training where federated data is distributed across the different client nodes.