ignite
High-level library to help with training and evaluating neural networks in PyTorch flexibly and transparently.
The idea is to enhance `Engine` so it can run on multiple data sources:
```python
data1 = ...
data2 = ...
data3 = ...
...

def process_function(engine, batches):
    batch1 = ...
```
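A minimal sketch of what a multi-source step could look like, assuming the engine zips the loaders and hands each step a tuple of batches. The tiny `Engine` class below is a self-contained stand-in for `ignite.engine.Engine` (same call shape: a process function taking `(engine, batch)`), not the real implementation:

```python
# Stand-in for ignite.engine.Engine so the sketch runs on its own.
class Engine:
    def __init__(self, process_function):
        self._process = process_function
        self.last_output = None

    def run(self, data, max_epochs=1):
        for _ in range(max_epochs):
            for batch in data:
                self.last_output = self._process(self, batch)
        return self


def process_function(engine, batches):
    # batches is a tuple: one batch per data source.
    batch1, batch2, batch3 = batches
    # A real step would forward/backward each batch; here we just combine.
    return batch1 + batch2 + batch3


data1 = [1, 2, 3]
data2 = [10, 20, 30]
data3 = [100, 200, 300]

trainer = Engine(process_function)
# Zip the sources so each iteration sees one batch from each.
trainer.run(list(zip(data1, data2, data3)), max_epochs=1)
print(trainer.last_output)  # 333
```

Zipping stops at the shortest source; a real feature would also need a policy (cycle, raise, pad) for sources of unequal length.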
## 📚 Documentation We now have https://pytorch-ignite.ai/ with many helpful resources, examples, tutorials, etc. We need to update the content of - https://pytorch.org/ignite/ : mention the site and...
Following the discussion from https://github.com/pytorch/ignite/issues/466#issuecomment-478339986 it would be nice to have such a metric in Ignite. In the context of a multilabel task, compute a top-k precision/recall per label (treating all...
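One plausible reading of "top-k precision per label": for each label, rank the samples by the score they assign that label, keep the top k, and measure what fraction are true positives. A plain-Python sketch under that assumption (an ignite `Metric` would wrap the same logic in `update()`/`compute()`; the function name is illustrative, not an existing ignite API):

```python
def top_k_precision_per_label(scores, targets, k):
    """scores: per-sample lists of label scores; targets: per-sample 0/1 lists."""
    n_labels = len(scores[0])
    precisions = []
    for label in range(n_labels):
        # Rank samples by the score they give this label, descending.
        ranked = sorted(range(len(scores)),
                        key=lambda i: scores[i][label], reverse=True)
        top_k = ranked[:k]
        # Fraction of the top-k samples that actually carry the label.
        hits = sum(targets[i][label] for i in top_k)
        precisions.append(hits / k)
    return precisions


scores = [[0.9, 0.1], [0.8, 0.7], [0.2, 0.6], [0.1, 0.4]]
targets = [[1, 0], [1, 1], [0, 0], [0, 1]]
print(top_k_precision_per_label(scores, targets, k=2))  # [1.0, 0.5]
```

A torch version would replace the sort with `torch.topk` along the sample dimension; averaging the per-label values would give a macro-averaged top-k precision.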
## 🚀 Feature Idea is to simplify user's learning curve and provide a template example that can be copied by the user and modified to his/her needs. Something like that...
When I tried calculating the time taken to complete a single epoch via `Timer`, the handlers attached to `trainer` before `Timer` were executed first, and thus their time also got...
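The ordering effect described above can be shown with a minimal dispatcher: handlers registered for the same event fire in attachment order, so a timing handler attached after a slow handler also measures the slow handler's work. The `Emitter` class is a stand-in for ignite's event system, used only to make the sketch self-contained:

```python
class Emitter:
    """Tiny stand-in for an ignite Engine's event dispatch."""
    def __init__(self):
        self.handlers = []

    def on(self, fn):
        # Handlers fire in the order they were attached.
        self.handlers.append(fn)

    def fire(self):
        for fn in self.handlers:
            fn()


calls = []
emitter = Emitter()
emitter.on(lambda: calls.append("slow_handler"))  # attached first
emitter.on(lambda: calls.append("timer_stop"))    # attached second
emitter.fire()
print(calls)  # ['slow_handler', 'timer_stop']
```

So to exclude other handlers from the measurement, the `Timer` has to be attached before them (or to a different pair of events).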
## 🚀 Feature As discussed in #1916, a [new profiler tool](https://pytorch.org/blog/introducing-pytorch-profiler-the-new-and-improved-performance-tool) was recently introduced and it would be nice to have a specific handler in ignite.
## 🚀 Feature `StateParamScheduler` has introduced an `attach` method. Following this comment https://github.com/pytorch/ignite/pull/2090#discussion_r717412539 , maybe this could also be introduced in the optimizer's parameter scheduler (`ParamScheduler`)? It will be necessary...
## 🚀 Feature The idea is to replace `torch.no_grad` with `inference_mode`, where appropriate, to speed up computations: evaluation, metrics. This works since pytorch v1.9.0. - https://pytorch.org/docs/1.9.0/notes/autograd.html#inference-mode - https://pytorch.org/docs/1.9.0/generated/torch.inference_mode.html?highlight=inference_mode#torch.inference_mode Let's...
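Since `inference_mode` only exists from torch 1.9.0 on, the replacement presumably needs a version-tolerant selector: use `inference_mode` when the installed torch provides it, else fall back to `no_grad`. A sketch of that dispatch, with dummy namespaces standing in for old and new torch modules so it runs without torch installed:

```python
from contextlib import nullcontext


def inference_ctx(torch_like):
    """Prefer inference_mode (torch >= 1.9.0) over no_grad when available."""
    if hasattr(torch_like, "inference_mode"):
        return torch_like.inference_mode()
    return torch_like.no_grad()


class FakeOldTorch:
    """Stand-in for torch < 1.9.0: only no_grad exists."""
    @staticmethod
    def no_grad():
        return nullcontext("no_grad")


class FakeNewTorch(FakeOldTorch):
    """Stand-in for torch >= 1.9.0: inference_mode is available."""
    @staticmethod
    def inference_mode():
        return nullcontext("inference_mode")


with inference_ctx(FakeNewTorch()) as mode:
    print(mode)  # inference_mode
with inference_ctx(FakeOldTorch()) as mode:
    print(mode)  # no_grad
```

With the real module one would pass `torch` itself (`with inference_ctx(torch): ...`); the caveat is that tensors created under `inference_mode` cannot later be used in autograd, so the swap is only safe for pure evaluation paths.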
## 🚀 Feature Feature request to support the case like:
```python
common.setup_common_training_handlers(
    trainer=trainer,
    ...
    lr_scheduler=[lr_scheduler1, lr_scheduler2],
)
```
cc @DhDeepLIT
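One straightforward way to support this is to normalize the `lr_scheduler` argument to a list and step every scheduler per iteration. The helper name and the scheduler stub below are illustrative, not ignite's actual implementation:

```python
def as_scheduler_list(lr_scheduler):
    """Accept None, a single scheduler, or a list/tuple of schedulers."""
    if lr_scheduler is None:
        return []
    if isinstance(lr_scheduler, (list, tuple)):
        return list(lr_scheduler)
    return [lr_scheduler]


class DummyScheduler:
    """Stand-in for a torch/ignite LR scheduler; counts its step() calls."""
    def __init__(self):
        self.steps = 0

    def step(self):
        self.steps += 1


lr_scheduler1, lr_scheduler2 = DummyScheduler(), DummyScheduler()
schedulers = as_scheduler_list([lr_scheduler1, lr_scheduler2])
for _ in range(3):          # e.g. once per training iteration
    for sched in schedulers:
        sched.step()
print([s.steps for s in schedulers])  # [3, 3]
```

Normalizing at the top of `setup_common_training_handlers` would keep the rest of the function unchanged: it always iterates over a list, whether the user passed one scheduler or several.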
## 🚀 Feature Since this [commit](https://github.com/pytorch/ignite/commit/002b595daa8a8345286c5e096c33e278948686a7), the metrics have been disabled on XLA devices due to a performance regression. https://github.com/pytorch/ignite/blob/7bdf92322a15ba0bea71ad76f3f785b7ea5907c3/ignite/metrics/metric.py#L224-L225 To solve this, it would be interesting to track which...