DataPoisoning_FL
Code for Data Poisoning Attacks Against Federated Learning Systems
With the mentioned settings, malicious participants are not visible here. How can I reproduce the output reported in the paper? _Originally posted by @AriesQa in https://github.com/git-disl/DataPoisoning_FL/issues/1#issuecomment-858541844_
Hi there, I am testing the feasibility of the label flipping attack, but how do I see the results corresponding to Table 2 in the paper? Are they in the log? or...
Hi, could you please help with this issue: I have changed the path used for saving and exporting the model to:  But I still get this error: What is the...
After running the command, it gives this error:

Traceback (most recent call last):
  File "/content/DataPoisoning_FL/defense.py", line 69, in <module>
    model_files = sorted(os.listdir(MODELS_PATH))
FileNotFoundError: [Errno 2] No such file or directory: '/absolute/path/to/models/folder/1823_models'
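The traceback above occurs because `MODELS_PATH` in defense.py still contains the `/absolute/path/to/models/folder` placeholder, or points at a folder that the training step never created. A minimal sketch of a defensive version of that listing step (the helper name `list_model_files` is hypothetical, not part of the repository's API):

```python
import os

def list_model_files(models_path):
    """Return sorted model checkpoint filenames from models_path.

    Fails early with an explanatory message instead of the opaque
    FileNotFoundError raised by a bare os.listdir() call.
    """
    if not os.path.isdir(models_path):
        # The placeholder path from defense.py must be replaced with the
        # real folder that the training run wrote its models into.
        raise FileNotFoundError(
            f"Models folder not found: {models_path!r}. "
            "Run the training step first, or point MODELS_PATH at the "
            "folder where the saved models actually live."
        )
    return sorted(os.listdir(models_path))
```

In practice, run the training script first so the models folder exists, then set `MODELS_PATH` in defense.py to that folder before invoking it.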
Hello, Thank you for your outstanding work on the FL defense mechanisms. I'm currently in the process of reproducing your implementation to gain a deeper understanding of how these defenses...