
Differential Privacy: Varying the Epsilon Parameter Has No Impact on Output

Open Hafizamariaiqbal opened this issue 1 year ago • 4 comments

🐛 Bug

When running the differential privacy code below, changing the value of the `eps` hyperparameter does not affect the results.

```python
# initialize number of clients
client_num = 10

# load data
d = load_mnist(client_num)

lr = 0.01
fl_param = {
    'output_size': 10,          # number of units in output layer
    'client_num': client_num,   # number of clients
    'C': 1,
    'model': MLP,               # model
    'data': d,                  # dataset
    'q': 0.1,                   # sampling rate
    'tot_T': 100,               # number of aggregation times
    'E': 5,                     # number of local iterations
    'batch_size': 64,
    'lr': 0.01,                 # learning rate
    'clip': 1,                  # clipping norm
    'eps': 0.5,                 # privacy budget for each global communication
    # 'eps': 50,                # privacy budget for each global communication
    'delta': 1e-5,              # approximate differential privacy: (epsilon, delta)-DP
    'device': device
}

import warnings
warnings.filterwarnings("ignore")

start_time = time.time()
fl_entity = FLServer(fl_param).to(device)

print('Currently performing FL with DP ---------------------------:')
print(datetime.datetime.now().strftime("%A, %d. %B %Y %I:%M%p"))
print(f'CPUs: {torch.get_num_threads()}, GPUs: {torch.cuda.device_count()} on {socket.gethostname()}.')

acc = []
loss = []
for t in range(fl_param['tot_T']):
    fl_entity.set_lr(lr)
    # Update the local model according to the weight updates
    loss_, acc_ = fl_entity.global_update()
    loss += [loss_]
    acc += [acc_]
    print("Round = {:d}, loss={:.4f}, acc = {:.4f}".format(t + 1, loss[-1], acc[-1]))

print(f'Total time: {time.time() - start_time :.2f} s.')
```
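For context on how large the effect of `eps` should be: with the classical analytic Gaussian mechanism, the per-step noise scale is `sigma = clip * sqrt(2 * ln(1.25 / delta)) / eps`, so noise is inversely proportional to epsilon. This is only a sketch of the textbook formula, not what Opacus computes internally (Opacus calibrates `noise_multiplier` through its accountant), but the inverse dependence on epsilon is the same and shows the three settings above should behave very differently:

```python
import math

def gaussian_sigma(eps, delta, clip_norm=1.0):
    # Classical analytic Gaussian-mechanism noise scale:
    # sigma = clip_norm * sqrt(2 * ln(1.25 / delta)) / eps.
    # Illustrative only; Opacus uses an accountant to calibrate noise.
    return clip_norm * math.sqrt(2 * math.log(1.25 / delta)) / eps

for eps in (0.5, 10, 50):
    print(f"eps = {eps:>4}: sigma ~ {gaussian_sigma(eps, 1e-5):.4f}")
```

With `delta = 1e-5` and `clip = 1` this gives roughly sigma = 9.7 at eps = 0.5, 0.48 at eps = 10, and 0.097 at eps = 50, i.e. a 100x difference in noise between the smallest and largest budgets, which should be clearly visible in accuracy if the noise is actually being applied.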

Hafizamariaiqbal avatar Mar 02 '23 09:03 Hafizamariaiqbal

Can you please share more information, like the test accuracy you obtain at various levels of epsilon?

alexandresablayrolles avatar Mar 07 '23 12:03 alexandresablayrolles

> Can you please share more information, like the test accuracy you obtain at various epsilon levels?

Yes. First I ran the code with an epsilon value of 0.5, with its corresponding noise. In the next experiment I used the same parameters but changed epsilon to 10, and saw no change. After that I increased the value to 50, but got the same results and the same per-round epsilon. Accuracy neither increases nor decreases when epsilon changes.

Hafizamariaiqbal avatar Mar 07 '23 14:03 Hafizamariaiqbal

I am experiencing the same issue, which partly overlaps with this report. I am using a neural network on tabular data.

I have experimented with different epsilon values, but the results remain relatively consistent. Interestingly, when using TensorFlow Privacy with the same model architecture, DP-optimizer, and hyperparameters, this problem does not arise.

zedoul avatar May 04 '23 15:05 zedoul

As an immediate step, could you print out the value of noise_multiplier (https://github.com/pytorch/opacus/blob/main/opacus/privacy_engine.py#L475)? That is the amount of noise added to the system. Sometimes, when the difference in noise is small, the model is robust to the noise change.
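To see why the printed `noise_multiplier` matters, here is a minimal sketch of the clip-then-noise step at the heart of DP-SGD. The function name `clip_and_noise` and the plain-list gradient are hypothetical simplifications (Opacus does this per-sample on tensors); the point is that the injected noise scales with `noise_multiplier * clip_norm`, so if two runs end up with nearly the same multiplier, their updates will be nearly indistinguishable regardless of the epsilon you requested:

```python
import math
import random

def clip_and_noise(grad, clip_norm=1.0, noise_multiplier=1.0, rng=random):
    # Illustrative DP-SGD step on a single gradient vector:
    # 1) rescale so the L2 norm is at most clip_norm,
    # 2) add per-coordinate Gaussian noise with std = noise_multiplier * clip_norm.
    norm = math.sqrt(sum(g * g for g in grad))
    scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    clipped = [g * scale for g in grad]
    return [g + rng.gauss(0.0, noise_multiplier * clip_norm) for g in clipped]

rng = random.Random(0)
grad = [3.0, 4.0]  # L2 norm 5.0, so it gets clipped down to norm 1.0
noisy = clip_and_noise(grad, clip_norm=1.0, noise_multiplier=1.0, rng=rng)
```

Comparing the printed `noise_multiplier` across your eps = 0.5, 10, and 50 runs should tell you immediately whether the privacy budget is actually reaching the noise-injection step.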

HuanyuZhang avatar Apr 24 '24 14:04 HuanyuZhang