Cory Cornelius
Can you verify that these two images collide? Here's what I see from following your directions:

```bash
$ python3 nnhash.py NeuralHash/model.onnx neuralhash_128x96_seed1.dat beagle360.png
59a34eabe31910abfb06f308
$ python3 nnhash.py NeuralHash/model.onnx...
```
Is it possible to export the patch (https://github.com/twosixlabs/armory/blob/388edde7d85f96dac6a96c13854b955f1bb5c3c3/armory/art_experimental/attacks/carla_obj_det_patch.py#L284) that is generated by the `armory.art_experimental.attacks.carla_obj_det_patch.CARLADapricotPatch` attack? Right now, when `export_samples > 0`, Armory outputs the patched images (https://github.com/twosixlabs/armory/blob/388edde7d85f96dac6a96c13854b955f1bb5c3c3/armory/art_experimental/attacks/carla_obj_det_patch.py#L298). Having the adversarial...
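For reference, a minimal sketch of the kind of export we have in mind; the `attack` object, its `_patch` attribute, and the value range here are assumptions about the attack's internals, not Armory's actual API:

```python
import numpy as np
from PIL import Image

# Hypothetical: assumes the fitted attack object keeps its raw patch
# in a `_patch` attribute as an HxWxC float array in [0, 255].
patch = attack._patch
patch_uint8 = np.clip(patch, 0, 255).astype(np.uint8)
Image.fromarray(patch_uint8).save("adversarial_patch.png")
```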
Would it be possible to add `kwargs` to the dataset configuration? Those `kwargs` could then be passed along to the `dataset_fn`. This would enable us to work around the slow...
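A rough sketch of the pass-through we're imagining; the function and key names below are hypothetical, not Armory's current API:

```python
# Hypothetical plumbing: forward an optional `kwargs` mapping from the
# dataset config straight to the dataset-loading function.
def load_dataset(dataset_config, dataset_fn):
    kwargs = dataset_config.get("kwargs", {})  # new, optional config key
    return dataset_fn(
        split=dataset_config["split"],
        batch_size=dataset_config["batch_size"],
        **kwargs,  # e.g. flags that let us work around slow code paths
    )
```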
Would it be possible to add the baseline scenario output JSON files to the repository? For example, it would be nice to have the JSON output for the UCF-101 baseline...
The PyTorch model in [shadow_metric.ipynb](https://github.com/privacytrustlab/ml_privacy_meter/blob/master/docs/shadow_metric.ipynb) uses `nn.CrossEntropyLoss`, which expects unnormalized logits. However, the model outputs probabilities due to the use of `nn.Softmax`. This prevents the model from achieving 100%...
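A small self-contained illustration of the mismatch (not the notebook's code): `nn.CrossEntropyLoss` applies log-softmax internally, so feeding it probabilities effectively applies softmax twice and flattens the loss surface.

```python
import torch
import torch.nn as nn

logits = torch.randn(8, 10)
target = torch.randint(0, 10, (8,))
criterion = nn.CrossEntropyLoss()  # expects unnormalized logits

loss_from_logits = criterion(logits, target)                 # intended usage
loss_from_probs = criterion(logits.softmax(dim=-1), target)  # double softmax

# The probability-fed loss is squashed toward log(num_classes),
# which is why training stalls short of fitting the data.
print(loss_from_logits.item(), loss_from_probs.item())
```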
# What does this PR do?

By default, we run the validation on the best model obtained at the end.

## Type of change

Please check all relevant options.

- ...
Right now `LitModular` does not support `ChainedScheduler`. This is because a `ChainedScheduler` does not take an optimizer, but rather a list of schedulers: https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.ChainedScheduler.html `LitModular` assumes `lr_scheduler` is a callable...
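To make the mismatch concrete, here is a minimal standalone example (not MART code): unlike a typical scheduler, `ChainedScheduler` is constructed from already-built schedulers, so a factory that only receives an optimizer cannot instantiate it directly.

```python
import torch
from torch.optim.lr_scheduler import ChainedScheduler, ConstantLR, ExponentialLR

model = torch.nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# Typical schedulers are built from an optimizer...
warmup = ConstantLR(optimizer, factor=0.1, total_iters=5)
decay = ExponentialLR(optimizer, gamma=0.9)

# ...but ChainedScheduler is built from a list of schedulers, so a
# callable that is only handed an optimizer cannot construct it.
scheduler = ChainedScheduler([warmup, decay])
```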
Right now our configuration is rather verbose when it comes to models:

```yaml
model:
  _target_: mart.models.LitModular
  _convert_: all
  output_preds_key: preds
  output_target_key: target
  modules: ...
  training_sequence: ...
  ...
```

We could...
Right now MART assumes inputs are in [0, 255]. This is a fine choice for 8-bit images; however, as we move toward multi-modal inputs like 24-bit depth, it might be better...
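As a toy illustration of why an 8-bit range is a poor fit for such inputs (the channel packing below is one common convention, not necessarily MART's):

```python
import numpy as np

# One common 24-bit depth encoding packs a single depth value across
# three 8-bit channels, so decoded values span [0, 2**24 - 1].
rgb = np.random.randint(0, 256, size=(4, 4, 3), dtype=np.uint64)
depth = rgb[..., 0] * 256**2 + rgb[..., 1] * 256 + rgb[..., 2]
print(depth.max())  # can be up to 16777215, far outside [0, 255]
```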
# What does this PR do?

This obsoletes #135 and #177. However, we should wait to merge this until we can remove the detection code and directly use torchvision.

##...