pytorch-image-models
The largest collection of PyTorch image encoders / backbones. Including train, eval, inference, export scripts, and pretrained weights -- ResNet, ResNeXt, EfficientNet, NFNet, Vision Transformer (ViT)...
Recently, the Facebook research team developed a method called DINO. While going through the repository, I found that there's a way to visualize the workings of the neural network (similar to...
It would be great to have the option to add `AntiAliasDownsampleLayer` to EfficientNet blocks. We could see a boost in accuracy, and it'd definitely provide models with more stable predictions...
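A minimal sketch of the anti-aliased downsampling idea (a fixed blur kernel applied before the strided step, in the spirit of BlurPool from "Making Convolutional Networks Shift-Invariant Again"). This is an illustration under those assumptions, not timm's actual `AntiAliasDownsampleLayer`:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BlurDownsample(nn.Module):
    """Anti-aliased stride-2 downsampling: blur with a fixed 3x3 binomial
    filter, then subsample. Illustrative sketch only."""

    def __init__(self, channels: int, stride: int = 2):
        super().__init__()
        self.stride = stride
        self.channels = channels
        # 3x3 binomial (Pascal) kernel [1, 2, 1] x [1, 2, 1], normalized.
        k = torch.tensor([1.0, 2.0, 1.0])
        kernel = (k[:, None] * k[None, :]) / 16.0
        # One copy of the filter per channel, applied depthwise.
        self.register_buffer("kernel", kernel.expand(channels, 1, 3, 3).contiguous())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = F.pad(x, (1, 1, 1, 1), mode="reflect")
        return F.conv2d(x, self.kernel, stride=self.stride, groups=self.channels)
```

One way to wire this into an EfficientNet-style block would be to set the block's strided depthwise conv back to stride 1 and place the blur-downsample right after it, so the feature map is low-pass filtered before being subsampled.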
https://github.com/idstcv/ZenNAS provides faster, larger, and more accurate versions of GPU-Efficient-Networks ( https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/byobnet.py ).
Along with the updated training/validation components in #458 for TPU support, add support for DeepSpeed/ZeRO:

* https://pytorch.org/docs/master/distributed.optim.html#torch.distributed.optim.ZeroRedundancyOptimizer
* https://github.com/microsoft/DeepSpeed

It would be fairly easy to support with the current training code, however...
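A minimal sketch of the `ZeroRedundancyOptimizer` side of this (not timm's actual training loop); the model name, learning rate, and the assumption that the process group is already initialized via `torchrun` are all illustrative:

```python
import torch
from torch.distributed.optim import ZeroRedundancyOptimizer
from torch.nn.parallel import DistributedDataParallel as DDP
import timm

def build_model_and_optimizer(local_rank: int):
    # Assumes dist.init_process_group(backend="nccl") has already run,
    # e.g. under torchrun.
    model = timm.create_model("resnet50", pretrained=False).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])

    # Each rank keeps only the optimizer state for its shard of the parameters,
    # cutting optimizer memory roughly by the world size.
    optimizer = ZeroRedundancyOptimizer(
        model.parameters(),
        optimizer_class=torch.optim.SGD,
        lr=0.1,
        momentum=0.9,
    )
    return model, optimizer
```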
**Is your feature request related to a problem? Please describe.** Is it possible to add pretrained weights for the paper [TimeSformer](https://arxiv.org/pdf/2102.05095v2.pdf)? This paper extends the Vision Transformer to attention in...
https://arxiv.org/abs/1911.09665 In the paper, they propose calculating two losses: one for the forward pass with "clean" BN params, and another for the forward pass with adversarial BN params. Then they...
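A minimal sketch of the auxiliary-BN idea from that paper: clean and adversarial batches share the same conv weights but are normalized by separate BatchNorm layers, and the two losses are summed. The `adversarial=` flag on the model's forward is a hypothetical signature for illustration, not timm's implementation:

```python
import torch
import torch.nn as nn

class DualBatchNorm2d(nn.Module):
    """One BN for clean inputs, one auxiliary BN for adversarial inputs."""

    def __init__(self, num_features: int):
        super().__init__()
        self.bn_clean = nn.BatchNorm2d(num_features)
        self.bn_adv = nn.BatchNorm2d(num_features)

    def forward(self, x: torch.Tensor, adversarial: bool = False) -> torch.Tensor:
        return self.bn_adv(x) if adversarial else self.bn_clean(x)

def advprop_step(model, criterion, x_clean, x_adv, target):
    # Clean and adversarial forward passes use their respective BN layers;
    # the combined loss is backpropagated once.
    loss_clean = criterion(model(x_clean, adversarial=False), target)
    loss_adv = criterion(model(x_adv, adversarial=True), target)
    return loss_clean + loss_adv
```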
I was trying the Grad-CAM implementation from this repo (https://github.com/jacobgil/pytorch-grad-cam) on EfficientNet but could not make it work: a size mismatch error occurs in the last layer. How about adding functionality...
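A minimal hand-rolled Grad-CAM sketch (not the jacobgil library) using forward/backward hooks on a timm EfficientNet; using `model.conv_head` as the target layer is an assumption about the `efficientnet_b0` architecture:

```python
import torch
import torch.nn.functional as F
import timm

def grad_cam(model, x, target_layer, class_idx=None):
    """Compute a Grad-CAM heatmap for an image batch x of shape (N, 3, H, W)."""
    feats, grads = {}, {}

    def fwd_hook(_m, _inp, out):
        feats["act"] = out

    def bwd_hook(_m, _gin, gout):
        grads["grad"] = gout[0]

    h1 = target_layer.register_forward_hook(fwd_hook)
    h2 = target_layer.register_full_backward_hook(bwd_hook)
    try:
        logits = model(x)
        if class_idx is None:
            class_idx = logits.argmax(dim=1)
        score = logits.gather(1, class_idx.view(-1, 1)).sum()
        model.zero_grad()
        score.backward()
    finally:
        h1.remove()
        h2.remove()

    # Channel weights = global-average-pooled gradients of the score
    # w.r.t. the target layer's activations.
    weights = grads["grad"].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * feats["act"]).sum(dim=1, keepdim=True))
    return F.interpolate(cam, size=x.shape[-2:], mode="bilinear", align_corners=False)

model = timm.create_model("efficientnet_b0", pretrained=True).eval()
x = torch.randn(1, 3, 224, 224)  # stand-in for a preprocessed image
heatmap = grad_cam(model, x, target_layer=model.conv_head)
```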
Hi rwightman, thank you for providing the timm tool, which is very useful! When I use CheckpointSaver in timm.utils to save a checkpoint, I run into this issue: `os.link(last_save_path, save_path) OSError: [Errno...
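This kind of `os.link` failure typically comes from filesystems that don't support hard links (e.g. some network or FUSE mounts). A minimal sketch of a copy fallback, written as a hypothetical helper rather than a patch to timm's actual CheckpointSaver:

```python
import os
import shutil

def link_or_copy(src: str, dst: str) -> None:
    """Prefer a hard link; fall back to a plain copy on filesystems
    where os.link raises OSError."""
    if os.path.exists(dst):
        os.unlink(dst)
    try:
        os.link(src, dst)
    except OSError:
        shutil.copy2(src, dst)
```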
- Moved all `argparse` configs to `.yaml` files. All configs are now parsed from the `.yaml` file. If necessary, any parameter can still be passed on the command line as before, and these...
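A minimal sketch of the general pattern being described: load defaults from a YAML file, then let command-line flags override them. The file name and the specific flags here are illustrative, not a real config schema:

```python
import argparse
import yaml

def parse_args():
    parser = argparse.ArgumentParser(description="Training")
    parser.add_argument("--config", default="", help="Path to a YAML config file")
    parser.add_argument("--model", default="resnet50")
    parser.add_argument("--lr", type=float, default=0.1)
    parser.add_argument("--batch-size", type=int, default=128)

    args, _ = parser.parse_known_args()
    if args.config:
        with open(args.config) as f:
            cfg = yaml.safe_load(f)
        parser.set_defaults(**cfg)   # YAML values become the new defaults
    return parser.parse_args()       # explicit CLI flags still win over YAML

if __name__ == "__main__":
    print(parse_args())
```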
Any plans to add the backbones of these [detectors](https://paperswithcode.com/paper/mobiledets-searching-for-object-detection) to the model zoo?