federated-learning
About the implementation of FedAvg
Why does FedAvg use a simple average without weights?
Hi, I have the same question as you. Have you solved it?
I think maybe the number of samples (train + test) for each client is the same, so the weight for each client is the same and we can average the models directly. This is my viewpoint.
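That equivalence is easy to check numerically: when every client holds the same number of samples, the FedAvg weighted average sum_i (n_i/n) * w_i reduces to the plain mean. A minimal sketch (the client parameter vectors and sample counts below are hypothetical, not from this repo):

```python
import numpy as np

def weighted_average(params, sample_counts):
    """FedAvg-style aggregation: sum_i (n_i / n) * w_i."""
    weights = np.asarray(sample_counts, dtype=float)
    weights /= weights.sum()
    return sum(w * p for w, p in zip(weights, params))

def simple_average(params):
    """Unweighted aggregation: (1/k) * sum_i w_i."""
    return sum(params) / len(params)

# Hypothetical client parameter vectors
params = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]

# Equal sample counts: the two aggregations coincide
print(np.allclose(weighted_average(params, [100, 100, 100]),
                  simple_average(params)))  # True

# Unequal sample counts: they differ
print(np.allclose(weighted_average(params, [300, 100, 100]),
                  simple_average(params)))  # False
```

So the simple average is only correct under the equal-sample-count assumption; with unbalanced clients the two results diverge.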
For FedAvg, loss = p1*L1 + ... + pk*Lk, where pi = ni/n and Li = li/ni (li is client i's total loss over its ni samples). I think this implementation changes the objective to n*loss = l1 + ... + lk, and for that loss function a simple average works. To optimize loss = p1*L1 + ... + pk*Lk instead, you need to divide by ni in each client's local loss and then take a weighted sum of their parameters.
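The identity behind that argument can be verified directly: with pi = ni/n and Li = li/ni, the weighted objective sum_i pi*Li equals (l1 + ... + lk)/n, so scaling by n leaves just the sum of per-client total losses. A quick numeric check (the per-client counts and losses are made-up values for illustration):

```python
# Check: with p_i = n_i/n and L_i = l_i/n_i,
# loss = sum_i p_i * L_i  ==  (l_1 + ... + l_k) / n
n_i = [50, 150, 200]        # hypothetical per-client sample counts
l_i = [12.5, 30.0, 44.0]    # hypothetical per-client total (summed) losses
n = sum(n_i)

L_i = [l / m for l, m in zip(l_i, n_i)]          # per-client mean losses
p_i = [m / n for m in n_i]                        # per-client weights

weighted = sum(p * L for p, L in zip(p_i, L_i))   # p1*L1 + ... + pk*Lk
summed = sum(l_i) / n                             # (l1 + ... + lk) / n

print(abs(weighted - summed) < 1e-12)  # True
```

This is why summing (rather than averaging) the loss locally lets the server get away with an unweighted parameter average: each term pi*Li = li/n already carries the client's weight.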