Cory Cornelius
Should this be closed @mzweilin?
I veto https://github.com/Tianxiaomo/pytorch-YOLOv4 because it isn't torch-like due to its use of `self.inference` state, which smells a lot like `self.training`. That will cause all kinds of headaches.
Interestingly, https://github.com/AlexeyAB/Yet-Another-YOLOv4-Pytorch implements "Self adversial training with fgsm": https://github.com/AlexeyAB/Yet-Another-YOLOv4-Pytorch/commit/ea3e0dc2c6d532de8d0ac342c0c9857a00056574
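For reference, the FGSM step itself is tiny (a generic sketch, not that repo's code; `model` and `loss_fn` are whatever you're training with):

```python
import torch

def fgsm(model, loss_fn, x, y, eps):
    # Perturb x by eps in the direction of the sign of the loss gradient.
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()
```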
The comment above this line is 🔑, although abusing primitive types is a bad idea too. It should probably be a dataclass that implements the mapping protocol so `**` unpacking works.
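A minimal sketch of what I mean (the `Batch` name and fields are made up); `**` unpacking only requires `keys()` and `__getitem__`:

```python
from dataclasses import dataclass, fields

@dataclass
class Batch:
    input: object
    target: object

    def keys(self):
        # ** unpacking calls keys(), then __getitem__ for each key
        return [f.name for f in fields(self)]

    def __getitem__(self, key):
        return getattr(self, key)

def forward(input, target):
    return input, target

batch = Batch(input="x", target="y")
forward(**batch)  # works: Batch satisfies the mapping protocol
```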
@mzweilin: I could use your help thinking about how to compose composers. The problem I'm seeing is that some composers want to modify the input and others want to...
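To make the problem concrete, here's a hypothetical sketch (the composer interface and both composers are invented for illustration): chaining only works if every composer agrees on what flows through the chain.

```python
import torch

# Invented interface: a composer maps (perturbation, input) -> input.
class Additive:
    def __call__(self, perturbation, input):
        return input + perturbation

class Clamp:
    def __call__(self, perturbation, input):
        return input.clamp(0, 255)

class Sequential:
    # Compose composers by threading the composed input through each one.
    def __init__(self, *composers):
        self.composers = composers

    def __call__(self, perturbation, input):
        for composer in self.composers:
            input = composer(perturbation, input)
        return input

composer = Sequential(Additive(), Clamp())
x = torch.zeros(3, 2, 2)
delta = torch.full_like(x, 300.0)
composer(delta, x)  # all values end up clamped to 255
```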
We can also just not worry about composition of composers.
We should really be using `torch.isclose` here anyway.
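Exact float equality is brittle, e.g.:

```python
import torch

a = torch.tensor([1.0, 2.0])
b = a + 1e-6  # tiny numerical drift, e.g. from a different op ordering

print(a == b)                # tensor([False, False]) — exact match fails
print(torch.isclose(a, b))   # tensor([True, True]) with default rtol/atol
print(torch.allclose(a, b))  # True, when a single bool is enough
```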
It would be really interesting to see if one could use `Adversary` in a Lightning `Callback`. That is, could we fit `Adversary` into [`Callback.on_train_batch_start`](https://lightning.ai/docs/pytorch/stable/extensions/callbacks.html#on-train-batch-start)? If we can, then we could...
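Something like this might work (a rough sketch: `Adversary`'s call signature is made up, and it assumes we can mutate the batch tensors in place, since `on_train_batch_start` can't return a replacement batch):

```python
import pytorch_lightning as pl

class AdversaryCallback(pl.Callback):
    def __init__(self, adversary):
        self.adversary = adversary

    def on_train_batch_start(self, trainer, pl_module, batch, batch_idx):
        input, target = batch
        # Copy in place so the training_step that follows sees the
        # perturbed input; this hook cannot return a modified batch.
        input.copy_(self.adversary(input=input, target=target, model=pl_module))

# trainer = pl.Trainer(callbacks=[AdversaryCallback(adversary)])
```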
I should note that I'm not sure this works in multi-GPU mode.
> I should note that I'm not sure this works in multi-GPU mode.

This does work, but one must beware that `BatchNorm` modules get turned into `SyncBatchNorm` when using DDP:...
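The swap is easy to reproduce locally, since the conversion itself doesn't need a process group:

```python
import torch

model = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3), torch.nn.BatchNorm2d(8))
model = torch.nn.SyncBatchNorm.convert_sync_batchnorm(model)

for module in model.modules():
    # isinstance checks against BatchNorm2d now miss these modules
    if isinstance(module, torch.nn.modules.batchnorm._BatchNorm):
        print(type(module).__name__)  # SyncBatchNorm, not BatchNorm2d
```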