Cory Cornelius
It looks like PyTorch 2.0 has resolved the issue: https://github.com/pytorch/pytorch/pull/91517#issuecomment-1477998981 Could you add a note in the comment? I think we will upgrade to PyTorch 2.0 soon after...
For some reason, setting `trainer.devices=2` does not also set `datamodule.world_size` to 2, even though this line exists: https://github.com/IntelLabs/MART/blob/eced15bdad18b6683190997590e2500a332b03e7/mart/configs/datamodule/default.yaml#L11 I think this is because many of the experiments override `datamodule.world_size`: https://github.com/IntelLabs/MART/blob/eced15bdad18b6683190997590e2500a332b03e7/mart/configs/experiment/CIFAR10_CNN.yaml#L27...
Why can't this be a `LitModular`? Then we would inherit all of the optimizer functionality. The problem, I think, is the modules functionality. However, that should, in theory, clean up the use...
Right now, the `Adversary` checks whether `model` is present in order to determine when to attack: https://github.com/IntelLabs/MART/blob/a2f936e5c4486e179fd8e47d03301b0f8bd16e9a/mart/attack/adversary.py#L314 However, because an `Adversary` can live at any layer, it should really check...
I think `hydra.utils.get_object`, available in Hydra 1.3.2 (https://github.com/facebookresearch/hydra/blob/2b5cb67edb88161d6dc4b62f90e867bcda68fe00/hydra/utils.py#L70), will enable MART to get rid of `DotDict`: https://github.com/IntelLabs/MART/blob/422978c233adff42cbc39674acf7cf8ebacf8348/mart/nn/nn.py#L162
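To make the idea concrete, here is a minimal, self-contained sketch of what `hydra.utils.get_object` does: it resolves a dotted path string to the actual Python object, so attribute lookups no longer need to go through a `DotDict`-style wrapper. This is an illustrative stand-in using only the standard library, not Hydra's implementation (which also handles nested attributes and error reporting).

```python
import importlib


def get_object(path: str):
    """Resolve a dotted path like 'math.sqrt' to the actual object.

    Illustrative stand-in for hydra.utils.get_object.
    """
    module_name, _, attr_name = path.rpartition(".")
    module = importlib.import_module(module_name)
    return getattr(module, attr_name)
```

With this, a config value like `"math.sqrt"` resolves directly to the callable, instead of being looked up dynamically through a dict-like namespace.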
Metric configs should automatically override `optimized_metric`. That is, there is no reason this: https://github.com/IntelLabs/MART/blob/2c62aad375e146036696e88c09f5cb3a0f7131fd/mart/configs/experiment/COCO_TorchvisionRetinaNet.yaml#L13 shouldn't automatically be included when adding this: https://github.com/IntelLabs/MART/blob/2c62aad375e146036696e88c09f5cb3a0f7131fd/mart/configs/experiment/COCO_TorchvisionRetinaNet.yaml#L6
Metrics should automatically override `callbacks.model_checkpoint.monitor`. That is, there is no reason this: https://github.com/IntelLabs/MART/blob/2c62aad375e146036696e88c09f5cb3a0f7131fd/mart/configs/callbacks/model_checkpoint.yaml#L9 shouldn't be overridden when including this: https://github.com/IntelLabs/MART/blob/2c62aad375e146036696e88c09f5cb3a0f7131fd/mart/configs/experiment/COCO_TorchvisionRetinaNet.yaml#L6
Right now it is not possible to use extended transforms with `Lambda` and `SplitLambda`. For example, it would be useful to do something like:

```yaml
_target_: mart.transforms.SplitLambda
lambd:
  _target_: mart.transforms.Compose
  ...
```
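For illustration, here is a hypothetical, self-contained sketch of the composition the YAML above asks for: a `SplitLambda`-style transform whose `lambd` is itself a `Compose` of transforms. These minimal classes only mimic the assumed semantics (apply `lambd` to one part of the input and pass the rest through); they are not the actual `mart.transforms` implementations.

```python
class Compose:
    """Apply a sequence of callables in order (torchvision-style)."""

    def __init__(self, transforms):
        self.transforms = transforms

    def __call__(self, x):
        for transform in self.transforms:
            x = transform(x)
        return x


class SplitLambda:
    """Hypothetical: apply `lambd` to the first `split` items, pass the rest through."""

    def __init__(self, lambd, split=1):
        self.lambd = lambd
        self.split = split

    def __call__(self, items):
        head = [self.lambd(x) for x in items[: self.split]]
        return head + list(items[self.split :])
```

The point is that `lambd` should accept any callable, including a `Compose` of extended transforms, e.g. `SplitLambda(Compose([f, g]))`.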
# What does this PR do?

```
Testing DataLoader 0:  86%|█████████████████████████████████████████████████████████████████████████▋ | 6/7 [20:09
```
I am becoming increasingly disillusioned with YAML configuration (non-standard errors, partial resolving, etc.). It would be really nice to implement Python-based configuration like Detectron2: https://github.com/facebookresearch/detectron2/blob/0ae803b1449cd2d3f8fa1b7c0f59356db10b3083/detectron2/config/lazy.py#L210
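For context, the core of Detectron2's approach is "lazy" construction: a config node records a callable and its arguments as plain Python objects, and nothing is built until instantiation time. The sketch below mirrors the names `LazyCall` and `instantiate` from Detectron2 but is a minimal illustration, not its implementation (the real version integrates with OmegaConf and supports config file loading).

```python
class LazyCall:
    """Record a callable and its kwargs without invoking it."""

    def __init__(self, target, **kwargs):
        self.target = target
        self.kwargs = kwargs


def instantiate(node):
    """Recursively build objects from a tree of LazyCall nodes."""
    if isinstance(node, LazyCall):
        kwargs = {k: instantiate(v) for k, v in node.kwargs.items()}
        return node.target(**kwargs)
    return node


# Usage: configs are ordinary Python, so errors surface as normal
# Python exceptions and any value can be computed programmatically.
cfg = LazyCall(dict, lr=0.1, schedule=LazyCall(list))
```

Because the config is plain Python, overrides are just attribute assignments (`cfg.kwargs["lr"] = 0.01`) and the full language is available for composing configs, which sidesteps YAML's partial-resolving and error-reporting issues.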