ignite
High-level library to help with training and evaluating neural networks in PyTorch flexibly and transparently.
## 🚀 Feature Since v1.7.0, PyTorch has provided the `persistent_workers` argument to `DataLoader`: https://pytorch.org/docs/1.7.0/data.html#torch.utils.data.DataLoader This can reduce per-epoch dataloader creation time with the native PyTorch distributed config (see the sketch below): - with...
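A minimal sketch of the flag in use, with an arbitrary toy dataset; the only point here is `persistent_workers=True`, which requires `num_workers > 0`:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.randn(64, 3), torch.randint(0, 2, (64,)))

# persistent_workers=True keeps worker processes alive across epochs,
# avoiding the per-epoch worker startup cost mentioned above.
loader = DataLoader(dataset, batch_size=8, num_workers=2, persistent_workers=True)

for epoch in range(2):
    for batch in loader:  # the same workers serve both epochs
        pass
```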
## 🐛 Bug description As mentioned in this [issue](https://github.com/Project-MONAI/MONAI/issues/2080) in MONAI, I tried to run [this tutorial code](https://github.com/Project-MONAI/tutorials/blob/master/acceleration/distributed_training/unet_training_workflows.py) with `torch.distributed.launcher`. However, the program froze while instantiating the [CheckpointSaver](https://github.com/Project-MONAI/MONAI/blob/dev/monai/handlers/checkpoint_saver.py). The reason...
## 🚀 Feature Hi @vfdev-5 , As you may know, the PyTorch and MLflow integration was announced last year: https://mlflow.org/news/2020/11/12/pytorch-mlflow-integration/index.html. It works well with `PyTorch-Lightning`, but it seems ignite doesn't support it so...
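For reference, a hedged sketch of what such an integration could build on, using only the public `mlflow.pytorch` API; this is not an existing ignite handler, and the parameter values are illustrative:

```python
import mlflow
import mlflow.pytorch
import torch.nn as nn

model = nn.Linear(10, 2)

with mlflow.start_run():
    mlflow.log_param("lr", 1e-3)  # illustrative hyperparameter
    # Save the model in MLflow's PyTorch flavor; "model" is an
    # arbitrary artifact path chosen for this example.
    mlflow.pytorch.log_model(model, "model")
```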
Fixes # Description: This adds exponential annealing to the contrib handlers. The goal is to be able to mimic fast.ai's learning rate finder. The [LRFinder](https://github.com/fastai/fastai/blob/master/fastai/callbacks/lr_finder.py#L9) [uses](https://github.com/fastai/fastai/blob/master/fastai/callbacks/lr_finder.py#L14) [annealing_exp](https://github.com/fastai/fastai/blob/3ff819262ff0896e4b66febbc2ffabf56e8e95f2/fastai/callback.py#L320) to find the...
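For context, a minimal sketch of the exponential annealing referenced above; the function mirrors fastai's `annealing_exp` formula (geometric rather than linear interpolation between two learning rates):

```python
def annealing_exp(start: float, end: float, pct: float) -> float:
    """Exponentially interpolate from start to end as pct goes from 0 to 1."""
    return start * (end / start) ** pct

# Example: the LR sweep an LR finder would perform over 100 iterations.
lrs = [annealing_exp(1e-7, 10.0, i / 99) for i in range(100)]
```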
Currently, Engine implicitly assumes: https://github.com/pytorch/ignite/blob/e3ef192c94d0d793a9303bec915fb846aaa3161f/ignite/engine/engine.py#L122 but we recently introduced the possibility to trigger a run with `max_iters`. If the epoch length is unknown, saving and reloading the engine's state probably won't work...
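A sketch of the scenario in question, assuming the `max_iters` argument mentioned above: the data source has no length, so the epoch length stays unknown during the run:

```python
from ignite.engine import Engine

def train_step(engine, batch):
    pass

trainer = Engine(train_step)

def infinite_data():
    while True:
        yield 0

# epoch_length cannot be inferred from a bare generator; saving
# trainer.state_dict() mid-run and resuming is what may break here.
trainer.run(infinite_data(), max_iters=1000)
```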
## 🚀 Feature Following the discussions from https://github.com/Project-MONAI/MONAI/issues/1987 , we may think of providing a unified data structure that merges the input batch and the output to simplify data access...
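A hypothetical illustration of the idea (the dict keys are invented for this example): the step output carries the input batch and the predictions together, so downstream handlers read one structure:

```python
from ignite.engine import Engine

def train_step(engine, batch):
    x, y = batch
    y_pred = x  # placeholder for a real model forward pass
    # Merge inputs and outputs into one dict; key names are illustrative.
    return {"x": x, "y": y, "y_pred": y_pred}

trainer = Engine(train_step)
# A handler could then read engine.state.output["y_pred"] next to the inputs.
```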
## 🚀 Feature PyTorch Lightning recently [added](https://pytorch-lightning.readthedocs.io/en/latest/advanced/advanced_gpu.html#ddp-optimizations) native support for [MS DeepSpeed](https://github.com/microsoft/DeepSpeed). I believe it would also be helpful for users if ignite incorporated the DeepSpeed pipeline for memory-efficient distributed training...
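A rough sketch of how DeepSpeed could be wired into an ignite training step; this is not an existing ignite integration, the config values are placeholders, and `deepspeed.initialize` additionally expects a distributed environment to be set up:

```python
import deepspeed
import torch.nn as nn
from ignite.engine import Engine

model = nn.Linear(10, 2)
ds_config = {"train_batch_size": 8}  # placeholder DeepSpeed config

# DeepSpeed wraps the model and owns the optimizer, backward, and step logic.
model_engine, optimizer, _, _ = deepspeed.initialize(
    model=model, model_parameters=model.parameters(), config=ds_config
)

def train_step(engine, batch):
    x, y = batch
    loss = nn.functional.cross_entropy(model_engine(x), y)
    model_engine.backward(loss)  # DeepSpeed handles scaling/accumulation
    model_engine.step()
    return loss.item()

trainer = Engine(train_step)
```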
### Description I recently noticed a possible speed improvement when porting the segmentation example to code-generator. According to the docs, examples, helper functions, and tests, we are calling `model.train()` or `model.eval()` inside...
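A minimal sketch of the suggested alternative: switch the model mode once per epoch via an event handler instead of calling `model.train()` on every iteration:

```python
import torch.nn as nn
from ignite.engine import Engine, Events

model = nn.Linear(10, 2)

def train_step(engine, batch):
    # no model.train() here; the mode is switched once per epoch below
    pass

trainer = Engine(train_step)

@trainer.on(Events.EPOCH_STARTED)
def set_train_mode(engine):
    model.train()
```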
## 🚀 Feature Hi @vfdev-5 , In the current implementation, the MONAI handlers are deeply coupled with the ignite engine. Users with different workflows can't leverage our handlers,...
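One hypothetical decoupling direction (all names below are invented for illustration): a handler depends only on the small surface it actually reads, so any object exposing that surface works, not just `ignite.engine.Engine`:

```python
from dataclasses import dataclass

@dataclass
class MinimalState:
    epoch: int = 0
    output: object = None

class MinimalEngine:
    """Duck-typed stand-in exposing only what a handler reads."""
    def __init__(self):
        self.state = MinimalState()

def print_epoch(engine):
    # Works with ignite's Engine or any object exposing .state.epoch
    print(f"epoch={engine.state.epoch}")
```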
Lately I experienced an issue with model checkpointing, so I wanted to move it to a discussion; I am unsure whether this is a "bug", and thus I opened...