
Different configuration parameters per client

mh-mahmoud opened this issue 2 years ago · 1 comment

Hello, thanks for making this great work public.

How can I send different parameters to each client? For example, I want each client to run a different number of iterations, and I want this parameter to be configured by the server.

Currently, I am assigning an ID to each client and running each client on a different machine.

I tried to modify fit_config in the strategy as shown below and then select the parameter on the client side based on its ID. However, it did not work, since the config parameter must be of type Dict[str, Scalar]:

def fit_config(self, rnd: int):
    config = {
        "learning_rate": 0.01,
        "batch_size": [32, 64],
        "local_epochs": [16, 8],
    }
    return config

Do you have any workaround or a better method to implement this?

Thanks

Originally posted by @mh-mahmoud in https://github.com/adap/flower/discussions/1031

mh-mahmoud avatar Jan 27 '22 16:01 mh-mahmoud

As you pointed out, the current FitIns definition looks like this:

@dataclass
class FitIns:
    """Fit instructions for a client."""

    parameters: Parameters
    config: Dict[str, Scalar]

Here the dictionary types for keys and values are already fixed: every config value must be a Scalar, so lists are not allowed. Since you want every client to receive a different value, you would rely on the client manager to send a different FitIns to each client.
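To make the constraint concrete, here is a small self-contained sketch. The Scalar alias is written out by hand to mirror flwr.common.Scalar, and is_valid_config is a hypothetical helper (not part of Flower) that checks a config dict against it:

```python
from typing import Union

# Hand-written alias mirroring flwr.common.Scalar
Scalar = Union[bool, bytes, float, int, str]

# Valid: one scalar value per key
config_ok = {"learning_rate": 0.01, "batch_size": 32, "local_epochs": 16}

# Invalid: list values do not satisfy Dict[str, Scalar]
config_bad = {"batch_size": [32, 64], "local_epochs": [16, 8]}

def is_valid_config(config) -> bool:
    """Return True if every value in the config dict is a Scalar."""
    return all(isinstance(v, (bool, bytes, float, int, str)) for v in config.values())

print(is_valid_config(config_ok))   # True
print(is_valid_config(config_bad))  # False
```

This is why the fit_config shown in the question fails: the per-client values have to be split out into separate per-client dicts, each holding one scalar.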

For reference, the current source code of the configure_fit function in the FedAvg strategy looks like this:

def configure_fit(
    self, rnd: int, parameters: Parameters, client_manager: ClientManager
) -> List[Tuple[ClientProxy, FitIns]]:
    """Configure the next round of training."""
    config = {}
    if self.on_fit_config_fn is not None:
        # Custom fit config function provided
        config = self.on_fit_config_fn(rnd)
    fit_ins = FitIns(parameters, config)

    # Sample clients
    sample_size, min_num_clients = self.num_fit_clients(
        client_manager.num_available()
    )
    clients = client_manager.sample(
        num_clients=sample_size, min_num_clients=min_num_clients
    )

    # Return client/config pairs
    return [(client, fit_ins) for client in clients]

This code assigns the same fit_ins to every client, as can be seen in the return line where the returned list is created. If you wish to set a different configuration for each client, the first step is to build a separate fit_ins object per client instead of sharing one object across the whole set of clients.

As an example, to set a different learning rate per client I would go for a solution such as this:

def configure_fit(
    self, rnd: int, parameters: Parameters, client_manager: ClientManager
) -> List[Tuple[ClientProxy, FitIns]]:
    """Configure the next round of training."""
    config = {}
    if self.on_fit_config_fn is not None:
        # Custom fit config function provided
        config = self.on_fit_config_fn(rnd)

    # Sample clients
    sample_size, min_num_clients = self.num_fit_clients(
        client_manager.num_available()
    )
    clients = client_manager.sample(
        num_clients=sample_size, min_num_clients=min_num_clients
    )

    fit_ins_array = [
        FitIns(parameters, {**config, "learning_rate": 0.01 * (idx + 1)})
        for idx, _ in enumerate(clients)
    ]
    # Return client/config pairs
    return [(client, fit_ins_array[idx]) for idx, client in enumerate(clients)]

The solution above sets a different learning rate for each client: 0.01 for the first selected client, 0.02 for the second, and so on. Take into account that this solution has not been tested, and some minor changes might be required for it to work. I split the return into two steps (creating fit_ins_array first) for clarity, but this could be simplified into a single step by creating the FitIns objects directly in the return line.
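For the original use case (a different number of local epochs per client), the same pattern can key the value off each client's id instead of its sample index. The helper below is a hypothetical, untested sketch: per_client_config and epochs_by_cid are names made up for illustration, and the commented pseudocode assumes each ClientProxy exposes a cid attribute.

```python
def per_client_config(base_config, cid, epochs_by_cid, default_epochs=1):
    """Merge the shared round config with a per-client local_epochs value.

    base_config: dict produced by on_fit_config_fn (shared across clients)
    cid: the client's id string
    epochs_by_cid: hypothetical mapping from client id to its epoch count
    """
    return {**base_config, "local_epochs": epochs_by_cid.get(cid, default_epochs)}

# Inside configure_fit one would then do something like (pseudocode):
#   return [
#       (client, FitIns(parameters, per_client_config(config, client.cid, epochs_by_cid)))
#       for client in clients
#   ]

print(per_client_config({"learning_rate": 0.01}, "0", {"0": 16, "1": 8}))
# → {'learning_rate': 0.01, 'local_epochs': 16}
```

Because each value in the merged dict is a plain scalar, the result satisfies the Dict[str, Scalar] requirement of FitIns.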

Could you kindly try this in your current strategy and give feedback?

sisco0 avatar Jan 30 '22 07:01 sisco0