keepsake
Make it possible to record the results of an experiment
Why?
It is possible to record params when creating an experiment, and it is possible to record metrics when creating a checkpoint, but sometimes you need to record the overall results of an experiment.
For example, if:
- Your training process has an evaluation step at the end. It doesn't make sense to make a new "checkpoint" for this, because your model isn't changing, and the evaluation metrics might be different from the training metrics.
- Your training process doesn't have iterations (e.g. sklearn) and there aren't any "checkpoints" as such. You just want to record your params and metrics all in one go.
How
Somehow, we want to be able to record metrics at the experiment level. We need to:
- Come up with a word for it (maybe "metrics" being used in two places is confusing? "results"?)
- Design an API for it (pass them to `stop()`? pass them to `init()`?)
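To make the two candidate API shapes concrete, here is a minimal sketch. This is *not* Keepsake's real API: the `Experiment` class, `init()` function, and `results` parameter are invented here purely to compare the options.

```python
# Hypothetical sketch of the two API shapes discussed above -- not
# Keepsake's actual API. All names here are illustrative stand-ins.

class Experiment:
    def __init__(self, params=None, results=None):
        self.params = params or {}
        # Option B: record results up front via init(). This is awkward,
        # since results usually aren't known until the experiment ends.
        self.results = results or {}

    def stop(self, results=None):
        # Option A: pass the final evaluation metrics to stop(), after
        # training (and any final evaluation step) has completed.
        if results is not None:
            self.results = results


def init(params=None, results=None):
    return Experiment(params=params, results=results)


# Option A in use: params at init, overall results at stop().
experiment = init(params={"learning_rate": 0.01})
experiment.stop(results={"test_accuracy": 0.92})
print(experiment.results)  # {'test_accuracy': 0.92}
```

Option A seems like the more natural fit for both motivating cases: a final evaluation step produces its metrics right before `stop()`, and a one-shot sklearn-style run can pass params to `init()` and results to `stop()` in one go.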
This is exactly what I'm looking for! Would be great to have this feature