
Testing accuracy is very low

Open lynnprosper opened this issue 4 years ago • 13 comments

Dear author, first, thank you for your code. I have run it, but the result is not satisfying. Result: Training accuracy: 43.00, Testing accuracy: 43.00

my cmd:

python main_fed.py --dataset cifar --num_channels 1 --model cnn --epochs 10 --gpu 0 --iid

Looking forward to your reply. Best wishes~

lynnprosper avatar Sep 05 '19 12:09 lynnprosper

Hi, I ran into this too. Have you solved it?

LYF14020510036 avatar Oct 15 '19 02:10 LYF14020510036

Yes, you're right. I never reproduced the accuracy as reported. Try more epochs and data augmentation; I achieved 60+, but that's still low.

shaoxiongji avatar Oct 15 '19 03:10 shaoxiongji
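For readers who want to try the data-augmentation suggestion above: the standard CIFAR-10 recipe is a random crop with 4-pixel padding plus a random horizontal flip (in torchvision, `RandomCrop(32, padding=4)` and `RandomHorizontalFlip`). A minimal NumPy-only sketch of the same idea, for illustration rather than as the repo's actual pipeline:

```python
import numpy as np

def augment(img, pad=4, rng=np.random.default_rng(0)):
    """Random crop with zero padding plus a random horizontal flip,
    mirroring the usual CIFAR-10 recipe (torchvision's
    RandomCrop(32, padding=4) + RandomHorizontalFlip)."""
    h, w, c = img.shape
    # zero-pad the image on all four sides
    padded = np.zeros((h + 2 * pad, w + 2 * pad, c), dtype=img.dtype)
    padded[pad:pad + h, pad:pad + w] = img
    # pick a random crop window of the original size
    top = rng.integers(0, 2 * pad + 1)
    left = rng.integers(0, 2 * pad + 1)
    out = padded[top:top + h, left:left + w]
    # flip left-right with probability 0.5
    if rng.random() < 0.5:
        out = out[:, ::-1]
    return out

img = np.arange(32 * 32 * 3, dtype=np.float32).reshape(32, 32, 3)
aug = augment(img)
print(aug.shape)  # (32, 32, 3) -- same shape, shifted/possibly flipped content
```

In practice this would be applied per batch during local training; the shape of each image is unchanged, only its content is jittered.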

Similar issues in another repo https://github.com/AshwinRJ/Federated-Learning-PyTorch/issues/2

shaoxiongji avatar Oct 15 '19 21:10 shaoxiongji

Thanks a lot. I also used parts of your code; it's very clear and useful.

Congratulations on this nice code.

najeebjebreel avatar Nov 15 '19 21:11 najeebjebreel

Cutting down args.num_users may work.

EEstq avatar Aug 11 '20 09:08 EEstq

Thanks for your code. I have a question regarding the following lines:

```python
num_shards, num_imgs = 200, 300
idx_shard = [i for i in range(num_shards)]
dict_users = {i: np.array([], dtype='int64') for i in range(num_users)}
idxs = np.arange(num_shards*num_imgs)
labels = dataset.train_labels.numpy()

# sort labels
idxs_labels = np.vstack((idxs, labels))
idxs_labels = idxs_labels[:, idxs_labels[1, :].argsort()]
idxs = idxs_labels[0, :]

# divide and assign
for i in range(num_users):
    rand_set = set(np.random.choice(idx_shard, 2, replace=False))
    idx_shard = list(set(idx_shard) - rand_set)
    for rand in rand_set:
        dict_users[i] = np.concatenate((dict_users[i], idxs[rand*num_imgs:(rand+1)*num_imgs]), axis=0)
```

Are you setting a fixed number of images per user here, equal to 600? So this only works in the case of 100 clients?

Minoo-Hsn avatar Aug 20 '20 23:08 Minoo-Hsn

@Minoo-Hsn yes, but you can change it via --num_users

shaoxiongji avatar Oct 03 '20 16:10 shaoxiongji
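To make the arithmetic behind this answer concrete: 200 shards of 300 images cover the 60,000 MNIST training images, and each user draws 2 shards, so with 100 users every user holds 2 × 300 = 600 images and all shards are used exactly once. A self-contained stand-in for the sampler (labels and sorting omitted, since they don't affect the counts):

```python
import numpy as np

# Hypothetical stand-in for the repo's non-IID sampler: 200 shards of
# 300 images each cover 60,000 training indices; each of the 100 users
# draws 2 shards without replacement.
num_shards, num_imgs, num_users = 200, 300, 100
idx_shard = list(range(num_shards))
idxs = np.arange(num_shards * num_imgs)              # 60,000 indices
dict_users = {i: np.array([], dtype='int64') for i in range(num_users)}

for i in range(num_users):
    # pick 2 shards for this user and remove them from the pool
    rand_set = set(np.random.choice(idx_shard, 2, replace=False))
    idx_shard = list(set(idx_shard) - rand_set)
    for rand in rand_set:
        dict_users[i] = np.concatenate(
            (dict_users[i], idxs[rand * num_imgs:(rand + 1) * num_imgs]))

print({len(v) for v in dict_users.values()})  # {600}: every user gets 600 images
```

Note that 100 users × 2 shards consumes exactly the 200 available shards, which is why `--num_users` values other than 100 require adjusting `num_shards` (or the shards-per-user count) accordingly.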

Hi, Shaoxiong~ I've read your code, it's nice, but I still cannot figure out this line in your Readme.md: "The scripts will be slow without the implementation of parallel computing." So, does that mean we readers have to implement parallel computing ourselves? Thank you~

Sprinter1999 avatar Nov 02 '20 07:11 Sprinter1999

@Sprinter1999 yes

shaoxiongji avatar Nov 02 '20 12:11 shaoxiongji


Me too. Low accuracy!

Pnme79 avatar May 23 '23 08:05 Pnme79

Increasing the number of local epochs may work. Obviously, the running time will also increase.

my cmd:

python main_fed.py --dataset cifar --num_channels 1 --model cnn --epochs 10 --gpu 0 --iid --local_ep 10

Result: Training accuracy: 50.45 Testing accuracy: 48.43

XiaoshuangJi avatar Aug 01 '23 08:08 XiaoshuangJi

However, blindly increasing the number of local epochs may harm accuracy while costing longer running time. When I changed local_ep from 10 to 15 or 20, the accuracy was even lower.

XiaoshuangJi avatar Aug 01 '23 08:08 XiaoshuangJi

Your experimental results make sense. In a non-IID scenario, too much local training harms the generalization of FedAvg's global model.

Sprinter1999 avatar Aug 01 '23 09:08 Sprinter1999
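A toy illustration of the client-drift effect described in the last two comments (this is not the repo's code; it assumes two clients with simple quadratic objectives): with one local step per round, FedAvg converges to the joint optimum, but with many local steps each client drifts to its own minimum and the plain average lands elsewhere.

```python
import numpy as np

# Two heterogeneous clients minimise a1*(w-c1)^2 and a2*(w-c2)^2.
# The joint optimum is (a1*c1 + a2*c2) / (a1 + a2).
a1, c1 = 1.0, 0.0
a2, c2 = 9.0, 1.0
joint_opt = (a1 * c1 + a2 * c2) / (a1 + a2)   # = 0.9

def fedavg_round(w, local_steps, lr=0.05):
    """One FedAvg round: each client runs local gradient descent
    from the global model w, then the results are averaged."""
    ws = []
    for a, c in [(a1, c1), (a2, c2)]:
        wi = w
        for _ in range(local_steps):           # gradient is 2*a*(w - c)
            wi -= lr * 2 * a * (wi - c)
        ws.append(wi)
    return sum(ws) / len(ws)                   # unweighted average (equal data)

for local_steps in (1, 50):
    w = 0.5
    for _ in range(200):                       # 200 communication rounds
        w = fedavg_round(w, local_steps)
    # distance from the joint optimum grows with more local steps
    print(local_steps, round(abs(w - joint_opt), 4))
```

With `local_steps=1` the final error is essentially zero, while with `local_steps=50` each client has effectively converged to its own minimum (0 and 1), so the averaged model sits near 0.5 instead of 0.9, matching the observation that more local epochs can hurt the global accuracy.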