How to interpret results
I tried the code from the federated_learning_for_text_generation tutorial, but with a different dataset and loaded model, and I get these results:
Round 0: Eval metrics: loss=13883295744.000, accuracy=0.500; Train metrics: loss=nan, accuracy=0.347
Round 1: Eval metrics: loss=0.899, accuracy=0.750; Train metrics: loss=0.947, accuracy=0.673
Round 2: Eval metrics: loss=0.656, accuracy=0.750; Train metrics: loss=0.946, accuracy=0.673
Round 3: Eval metrics: loss=0.656, accuracy=0.750; Train metrics: loss=0.946, accuracy=0.673
But I can't understand why the accuracy does not increase over rounds.
Can anyone help me interpret these results?
Thanks
@aynesss It looks like it is training, since the loss is decreasing. It's worth noting that a round is only a small model change, so I wouldn't expect the accuracy to increase significantly from one round to the next. Generally, we've had to run thousands of rounds to train a model (though this depends on the number of clients participating in a round).
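For reference, here is a minimal sketch of what a longer training loop might look like, assuming the older tff.learning simulation API that the text-generation tutorial uses; `model_fn`, `train_data`, and `test_data` below are placeholders for your own model function and client datasets, not part of the original post:

```python
import tensorflow as tf
import tensorflow_federated as tff

# Assumed placeholders: model_fn builds a tff.learning.Model,
# train_data / test_data are lists of client tf.data.Datasets.
fed_avg = tff.learning.build_federated_averaging_process(
    model_fn,
    client_optimizer_fn=lambda: tf.keras.optimizers.SGD(learning_rate=0.5))
evaluation = tff.learning.build_federated_evaluation(model_fn)

state = fed_avg.initialize()
for round_num in range(1000):  # expect to need many rounds, not 3-4
    # Each call to next() applies only a small update to the global model.
    state, train_metrics = fed_avg.next(state, train_data)
    if round_num % 100 == 0:
        eval_metrics = evaluation(state.model, test_data)
        print(f'Round {round_num}: train={train_metrics}, eval={eval_metrics}')
```

With a loop like this you can plot the metrics over hundreds or thousands of rounds, which makes the (slow) trend in accuracy much easier to see than comparing a handful of consecutive rounds.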
Does that answer your question? This might be a more appropriate question for the Stack Overflow tag, unless there's a specific bug or feature request in the GitHub code that you'd like to discuss.