adversarial-robustness-toolbox
Dev 1.14.0
Description
Please include a summary of the change, the motivation, and which issue is fixed. Any dependency changes should also be included.
Fixes # (issue)
Type of change
Please check all relevant options.
- [ ] Improvement (non-breaking)
- [ ] Bug fix (non-breaking)
- [ ] New feature (non-breaking)
- [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
- [ ] This change requires a documentation update
Testing
Please describe the tests that you ran to verify your changes. Consider listing any relevant details of your test configuration.
- [ ] Test A
- [ ] Test B
Test Configuration:
- OS
- Python version
- ART version or commit number
- TensorFlow / Keras / PyTorch / MXNet version
Checklist
- [ ] My code follows the style guidelines of this project
- [ ] I have performed a self-review of my own code
- [ ] I have commented my code
- [ ] I have made corresponding changes to the documentation
- [ ] My changes generate no new warnings
- [ ] I have added tests that prove my fix is effective or that my feature works
- [ ] New and existing unit tests pass locally with my changes
- [ ] My changes have been tested using both CPU and GPU devices
Codecov Report
All modified and coverable lines are covered by tests :white_check_mark:
Comparison is base (0400813) 85.60% compared to head (8c69e55) 85.37%.
Additional details and impacted files
@@             Coverage Diff              @@
##           dev_1.18.0    #2341      +/-   ##
==============================================
- Coverage       85.60%   85.37%   -0.23%
==============================================
  Files             324      327       +3
  Lines           29326    30205     +879
  Branches         5407     5589     +182
==============================================
+ Hits            25104    25789     +685
- Misses           2840     2966     +126
- Partials         1382     1450      +68
Hi @OrsonTyphanel93 Could you please add a description and title to this PR?
Stylistic Backdoors in audio data (TranStyBack)
The backdoor attack, TranStyBack, inserts malicious triggers (audio clapping) into audio data using digital musical effects. The triggers are generated from six different styles, each with specific parameters, and these stylistic triggers are applied to the audio data during the backdoor attack phase. The attack poisons a subset of the training data, up to 1% of the samples; for each poisoned sample, the trigger is stretched or trimmed to match the duration of the audio clip, ensuring correct alignment, and the scaled trigger values are then added to the corresponding audio samples.
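For context, here is a minimal sketch of the poisoning step described above, assuming waveforms in [-1, 1] stored as a 2-D NumPy array. The function name `poison_audio_with_trigger`, the `scale` parameter, and the use of `np.interp` for length-matching are illustrative assumptions, not the implementation added in this PR.

```python
import numpy as np

def poison_audio_with_trigger(x, trigger, scale=0.1, poison_fraction=0.01, rng=None):
    """Add a scaled, length-matched trigger to a random subset of audio samples.

    x:       array of shape (n_samples, n_timesteps), waveforms in [-1, 1]
    trigger: 1-D array holding the stylistic trigger waveform
    """
    rng = np.random.default_rng() if rng is None else rng
    x_poisoned = x.copy()

    # Poison up to `poison_fraction` of the training samples (at least one).
    n_poison = max(1, int(poison_fraction * len(x)))
    idx = rng.choice(len(x), size=n_poison, replace=False)

    # Stretch or shrink the trigger so it spans the full clip duration.
    target_len = x.shape[1]
    positions = np.linspace(0, len(trigger) - 1, target_len)
    trigger_resized = np.interp(positions, np.arange(len(trigger)), trigger)

    # Add the scaled trigger and keep the waveform in a valid range.
    x_poisoned[idx] = np.clip(x_poisoned[idx] + scale * trigger_resized, -1.0, 1.0)
    return x_poisoned, idx
```

In this sketch the trigger is resampled once per dataset because all clips share a length; variable-length clips would need per-sample resampling.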