
AssertionError: compute_gradients() on the differentially private optimizer was not called. Which means that the training is not differentially private. It happens for example in Keras training in TensorFlow 2.0+.

mcps5601 opened this issue 4 years ago · 3 comments

Dear Developers,

Thank you for implementing and maintaining such a great repository. I tried to train my model without eager execution in TensorFlow version 2.3.1. Some code snippets are below:

import tensorflow as tf
from tensorflow_privacy.privacy.optimizers.dp_optimizer import (
    DPGradientDescentGaussianOptimizer)

optimizer = DPGradientDescentGaussianOptimizer(
    l2_norm_clip=1.0,
    noise_multiplier=0.1,
    num_microbatches=args.batch_size,
    learning_rate=0.15
)

with tf.GradientTape() as gradient_tape:
    loss_real = tf.keras.losses.binary_crossentropy(
        tf.ones_like(Y_real), Y_real, from_logits=True)
    loss_fake = tf.keras.losses.binary_crossentropy(
        tf.zeros_like(Y_fake), Y_fake, from_logits=True)
    loss = loss_real + loss_fake

# Gradients are computed manually and passed straight to apply_gradients.
var_list = self.model.trainable_weights
grads = gradient_tape.gradient(loss, var_list)
optimizer.apply_gradients(zip(grads, var_list))

How can I solve this problem? Does TensorFlow Privacy now support the newer Optimizer API in TensorFlow 2.x?

mcps5601 avatar Oct 28 '20 12:10 mcps5601

Hi! Have you solved your problem yet? I'm facing the same issue while switching my optimizer to the DP optimizer:

# training updates
pretrain_opt = self.g_optimizer(self.learning_rate)
self.pretrain_grad, _ = tf.clip_by_global_norm(
    tf.gradients(self.pretrain_loss, self.g_params), self.grad_clip)
self.pretrain_updates = pretrain_opt.apply_gradients(
    zip(self.pretrain_grad, self.g_params))

self.g_optimizer comes from this part:

def g_optimizer(self, *args, **kwargs):
    return dptf.DPAdamGaussianOptimizer(1, 0.001, 1, *args, **kwargs)

Flyige avatar May 13 '22 16:05 Flyige

Hi @Flyige and @mcps5601 ...

Sorry for the delay in replying. The DP-SGD optimizers work by clipping the per-microbatch gradients and adding noise, and this happens inside the method compute_gradients. If you call apply_gradients with gradients you computed yourself (via tf.gradients, a GradientTape, or something like that) and never call compute_gradients, then the results won't be private. The assertion is a safeguard reminding you to let the optimizer compute the gradients through compute_gradients.
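For the eager/GradientTape snippet in the first post, the intended pattern looks roughly like the sketch below, which follows the library's eager-mode MNIST tutorial. This is only a sketch: real_batch and fake_batch are hypothetical stand-ins for the discriminator's real and generated inputs, and the loss callable has to return the unreduced per-example loss vector, whose length num_microbatches must divide.

# persistent=True is needed because the optimizer reuses the tape
# once per microbatch.
with tf.GradientTape(persistent=True) as gradient_tape:
    # Dummy forward pass just to obtain the variable list
    # (same trick as in the eager-mode tutorial).
    Y_real = self.model(real_batch, training=True)
    var_list = self.model.trainable_weights

    # In eager mode, compute_gradients expects a *callable* returning
    # the per-example loss vector, not a precomputed loss tensor.
    def loss_fn():
        Y_real = self.model(real_batch, training=True)  # hypothetical inputs
        Y_fake = self.model(fake_batch, training=True)
        loss_real = tf.keras.losses.binary_crossentropy(
            tf.ones_like(Y_real), Y_real, from_logits=True)
        loss_fake = tf.keras.losses.binary_crossentropy(
            tf.zeros_like(Y_fake), Y_fake, from_logits=True)
        return loss_real + loss_fake  # one value per example, not reduced

    # Clipping and noising happen inside compute_gradients.
    grads_and_vars = optimizer.compute_gradients(
        loss_fn, var_list, gradient_tape=gradient_tape)

optimizer.apply_gradients(grads_and_vars)

For the graph-mode code with tf.gradients, the same idea applies: something like the following, with the manual clip_by_global_norm dropped because the DP optimizer does its own per-microbatch clipping (assuming self.pretrain_loss is compatible with the num_microbatches you configured, here 1).

grads_and_vars = pretrain_opt.compute_gradients(
    self.pretrain_loss, var_list=self.g_params)
self.pretrain_updates = pretrain_opt.apply_gradients(grads_and_vars)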

schien1729 avatar May 18 '22 23:05 schien1729

I'm getting the same error. If you could help me with adding differential privacy to the GAN, it would be a great help.

ArunTellis avatar Jan 24 '24 13:01 ArunTellis