
Server global evaluation total number

Open · shuangyichen opened this issue 1 year ago · 1 comment

Below is an eval record from the server. I wonder why test_total and val_total are decimals. From my understanding, this is a global-model evaluation. If my test set has 872 samples in total, is the number of test samples in each evaluation 872, or 872 / num_clients (in my case, 3)?

```
{'Role': 'Server #', 'Round': 1,
 'Results_weighted_avg': {'test_loss': 205.44600348516343, 'test_total': 290.6666666666667, 'test_avg_loss': 0.7068027881307339, 'test_acc': 0.5091743119266054, 'val_loss': 154.90539731894472, 'val_total': 224.66666666666666, 'val_avg_loss': 0.6894755830396885, 'val_acc': 0.5534124629080118},
 'Results_avg': {'test_loss': 205.44401041666666, 'test_total': 290.6666666666667, 'test_avg_loss': 0.7067977845605226, 'test_acc': 0.5091875024686969, 'val_loss': 154.90218098958334, 'val_total': 224.66666666666666, 'val_avg_loss': 0.6894642857142856, 'val_acc': 0.5534457671957672},
 'Results_fairness': {'test_total': 290.6666666666667, 'val_total': 224.66666666666666,
  'test_loss_std': 2.7915573098698245, 'test_loss_bottom_decile': 203.2431640625, 'test_loss_top_decile': 209.3828125, 'test_loss_min': 203.2431640625, 'test_loss_max': 209.3828125, 'test_loss_bottom10%': nan, 'test_loss_top10%': 209.3828125, 'test_loss_cos1': 0.9999076969612101, 'test_loss_entropy': 1.0985202582012823,
  'test_avg_loss_std': 0.009149269452723627, 'test_avg_loss_bottom_decile': 0.6984301170532646, 'test_avg_loss_top_decile': 0.7195285652920962, 'test_avg_loss_min': 0.6984301170532646, 'test_avg_loss_max': 0.7195285652920962, 'test_avg_loss_bottom10%': nan, 'test_avg_loss_top10%': 0.7195285652920962, 'test_avg_loss_cos1': 0.999916228188608, 'test_avg_loss_entropy': 1.0985287222378004,
  'test_acc_std': 0.02519823611017372, 'test_acc_bottom_decile': 0.4742268041237113, 'test_acc_top_decile': 0.5326460481099656, 'test_acc_min': 0.4742268041237113, 'test_acc_max': 0.5326460481099656, 'test_acc_bottom10%': nan, 'test_acc_top10%': 0.5326460481099656, 'test_acc_cos1': 0.998777755685617, 'test_acc_entropy': 1.097375117975531,
  'val_loss_std': 2.4467666521872635, 'val_loss_bottom_decile': 152.734375, 'val_loss_top_decile': 158.32177734375, 'val_loss_min': 152.734375, 'val_loss_max': 158.32177734375, 'val_loss_bottom10%': nan, 'val_loss_top10%': 158.32177734375, 'val_loss_cos1': 0.9998752734853387, 'val_loss_entropy': 1.0984879472148361,
  'val_avg_loss_std': 0.010041464909396417, 'val_avg_loss_bottom_decile': 0.6818498883928571, 'val_avg_loss_top_decile': 0.70365234375, 'val_avg_loss_min': 0.6818498883928571, 'val_avg_loss_max': 0.70365234375, 'val_avg_loss_bottom10%': nan, 'val_avg_loss_top10%': 0.70365234375, 'val_avg_loss_cos1': 0.9998939595599015, 'val_avg_loss_entropy': 1.0985065869345862,
  'val_acc_std': 0.026944572693224266, 'val_acc_bottom_decile': 0.5155555555555555, 'val_acc_top_decile': 0.5758928571428571, 'val_acc_min': 0.5155555555555555, 'val_acc_max': 0.5758928571428571, 'val_acc_bottom10%': nan, 'val_acc_top10%': 0.5758928571428571, 'val_acc_cos1': 0.9988169822369142, 'val_acc_entropy': 1.0974135282771185}}
```

shuangyichen commented on Jan 25 '24

Yes, you are right: the server aggregates per-client evaluation results, so test_total and val_total are the clients' test/val set sizes averaged across clients, which is why they are decimals (872 / 3 gives exactly the reported test_total of 290.666...).
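A minimal sketch of why the totals come out as decimals, assuming a hypothetical split of the 872 test samples across 3 clients (the exact per-client sizes depend on the data splitter; these numbers are illustrative):

```python
# Hypothetical per-client test-set sizes summing to 872 samples.
client_test_totals = [291, 291, 290]

# The server-side 'test_total' is the mean of the per-client totals,
# not the sum, so it can be fractional.
avg_total = sum(client_test_totals) / len(client_test_totals)
print(avg_total)  # prints 290.6666666666667, matching the reported 'test_total'
```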

rayrayraykk commented on Mar 05 '24