IterationMetric discussion
🚀 Feature
I'd like to discuss a bit about your PR on IterationMetric (https://github.com/Project-MONAI/MONAI/pull/1488).
As far as I understand, the idea is to compute the metric value on each iteration and "all gather" everything when the epoch ends?
However, the final list then contains all metric values for a single epoch: [m1, m2, ..., mN].
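To make the pattern under discussion concrete, here is a minimal pure-Python sketch of that accumulation scheme (the class and method names are made up for illustration; this is not MONAI's actual implementation, and the distributed all_gather step is only indicated by a comment):

```python
class IterationMetricSketch:
    """Hypothetical sketch: compute a metric on every iteration, keep
    all values in a list, and return them when the epoch ends."""

    def __init__(self, metric_fn):
        self.metric_fn = metric_fn
        self._scores = []  # grows by one entry per iteration -> RAM concern

    def iteration_completed(self, y_pred, y):
        self._scores.append(self.metric_fn(y_pred, y))

    def epoch_completed(self):
        # in a distributed setting this is where all_gather would run,
        # producing [m1, m2, ..., mN] across all processes
        return list(self._scores)

m = IterationMetricSketch(lambda y_pred, y: abs(y_pred - y))
for y_pred, y in [(1.0, 1.5), (2.0, 2.0), (3.0, 2.0)]:
    m.iteration_completed(y_pred, y)
print(m.epoch_completed())  # -> [0.5, 0.0, 1.0]
```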
@Nic-Ma, let's discuss here then.
Hi @vfdev-5 ,
Thanks for raising the discussion here.
IterationMetric is the base class for the other metrics; every metric will build such a list.
I don't quite understand your question, could you please share more details about what is wrong?
Thanks.
Hi @Nic-Ma , it was not a real question, just my understanding of the feature you introduced...
I can understand the use-case, but I'm a bit hesitant about storing all values in RAM, since that can lead to high memory consumption...
I agree that it may depend on what kind of processing is intended after gathering all the values: [m1, m2, ..., mN].
In the simplest case, we could just write each value to disk in a separate file. In a more complicated case, we could write all values into a single file (either appending each time, or once when the epoch is completed).
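The "single file, append each time" variant could look roughly like this sketch (a hypothetical helper, assuming a JSON-lines file format; nothing here is an existing ignite or MONAI API):

```python
import json
import os
import tempfile

def append_metric_to_file(path, iteration, value):
    """Hypothetical sketch: append one metric value per iteration to a
    single JSON-lines file, so values never accumulate in RAM."""
    with open(path, "a") as f:
        f.write(json.dumps({"iteration": iteration, "value": value}) + "\n")

tmpdir = tempfile.mkdtemp()
path = os.path.join(tmpdir, "metrics.jsonl")
for i, v in enumerate([0.5, 0.0, 1.0]):
    append_metric_to_file(path, i, v)

with open(path) as f:
    records = [json.loads(line) for line in f]
print([r["value"] for r in records])  # -> [0.5, 0.0, 1.0]
```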
@sdesrozis what are your thoughts about that ?
PS: BTW, we also have https://github.com/pytorch/ignite/blob/702d3a2eed412118651004d6c7c03739ac33dfd4/ignite/contrib/handlers/stores.py#L6
which stores any output data during the epoch (and does not all_gather)
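For reference, the behavior of such an output store can be summarized with this simplified stand-in (not the actual ignite implementation; the event bindings are only indicated in comments):

```python
class OutputStoreSketch:
    """Simplified stand-in for an epoch output store: collect every
    iteration output during an epoch and reset when a new epoch starts.
    No all_gather is performed."""

    def __init__(self, output_transform=lambda x: x):
        self.output_transform = output_transform
        self.data = []

    def reset(self):
        # in ignite this would be bound to Events.EPOCH_STARTED
        self.data = []

    def update(self, output):
        # in ignite this would be bound to Events.ITERATION_COMPLETED
        self.data.append(self.output_transform(output))

store = OutputStoreSketch(output_transform=lambda out: out["loss"])
store.reset()
for out in [{"loss": 0.9}, {"loss": 0.4}]:
    store.update(out)
print(store.data)  # -> [0.9, 0.4]
```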
Hi @vfdev-5 ,
Here is the following PR to consume the metrics output: https://github.com/Project-MONAI/MONAI/pull/1497
It's still under review.
The idea is to save the metric details into engine.state, then other handlers can save them to a file or visualize them somewhere.
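A downstream handler along those lines could look like this sketch (the state attribute name and the handler are hypothetical, chosen only to illustrate the "read from engine.state, write to file" split):

```python
import csv
import io

class FakeState:
    """Stand-in for engine.state carrying per-iteration metric details
    (hypothetical attribute name)."""
    def __init__(self, metric_details):
        self.metric_details = metric_details

def save_details_handler(state, stream):
    """Hypothetical downstream handler: dump metric details found in the
    engine state to CSV, so another tool can visualize them later."""
    writer = csv.writer(stream)
    writer.writerow(["iteration", "value"])
    for i, v in enumerate(state.metric_details):
        writer.writerow([i, v])

state = FakeState([0.5, 0.0, 1.0])
buf = io.StringIO()
save_details_handler(state, buf)
print(buf.getvalue().splitlines()[1])  # -> 0,0.5
```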
Thanks.
Thanks for the link @Nic-Ma !
@Nic-Ma In your implementation, the IterationMetric could be used without inheritance, right?
def my_algorithm(y, y_pred):
    # ... do something
    ...

my_metric = IterationMetric(metric_fn=my_algorithm,
                            output_transform=output_transform,
                            device=device)
Somehow the functional and algorithmic parts are split.
I think it is a good idea to reuse existing metrics.
Hi @sdesrozis ,
Yes, that's the expected behavior: you can use an algorithm (a function or a callable class) as an arg, or define a subclass.
Thanks.
@sdesrozis I think we can try to implement this as a new SampleWise metric usage. Would you like to work on that?
Maybe it could also open a discussion on how to better generalize the MetricUsage structure and cover the issue with the RunningAverage and Frequency metrics...