Nicolas
> Thanks for the PR! There's a concern of breaking backward compatibility for those who have custom models that have overridden `compute_metrics()`, so unless `model.train_step()` or `model.test_step()` inspect the signature...
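The signature-inspection idea mentioned above can be sketched without TensorFlow. This is a hypothetical helper (the name `call_compute_metrics` and the dispatch logic are mine, not from the PR): it passes `training` only when the overridden `compute_metrics` accepts it, so old-signature subclassed models keep working.

```python
import inspect

def call_compute_metrics(model, x, y, y_pred, sample_weight, training):
    """Call model.compute_metrics, forwarding `training` only if the
    override accepts it (keeps old-signature subclasses working)."""
    params = inspect.signature(model.compute_metrics).parameters
    accepts_training = "training" in params or any(
        p.kind is inspect.Parameter.VAR_KEYWORD for p in params.values()
    )
    if accepts_training:
        return model.compute_metrics(x, y, y_pred, sample_weight,
                                     training=training)
    return model.compute_metrics(x, y, y_pred, sample_weight)

class OldStyleModel:
    # pre-PR signature: no `training` parameter
    def compute_metrics(self, x, y, y_pred, sample_weight):
        return {"loss": 0.5}

class NewStyleModel:
    # post-PR signature: accepts `training`
    def compute_metrics(self, x, y, y_pred, sample_weight, training=False):
        return {"loss": 0.5, "training": training}
```

With this dispatch, `model.train_step()` / `model.test_step()` could call the helper and both model styles would work unchanged.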
The training information during metric computation can be useful for several reasons:
- The metrics are expensive and one would like to disable/sub-sample them during training.
- The training and...
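The first bullet can be illustrated with a toy, framework-free sketch (the class and method names here are illustrative, not the PR's code): an expensive metric is skipped whenever `training=True` and only computed at evaluation time.

```python
class TrainingAwareMetrics:
    """Toy sketch: skip an expensive metric during training and
    compute it only at evaluation time."""

    def __init__(self):
        self.calls = {"cheap": 0, "expensive": 0}

    def compute_metrics(self, y, y_pred, training=False):
        metrics = {"cheap_metric": self._cheap(y, y_pred)}
        self.calls["cheap"] += 1
        if not training:  # only pay for the expensive metric on eval
            metrics["expensive_metric"] = self._expensive(y, y_pred)
            self.calls["expensive"] += 1
        return metrics

    def _cheap(self, y, y_pred):
        # mean absolute error: cheap to compute every step
        return sum(abs(a - b) for a, b in zip(y, y_pred)) / len(y)

    def _expensive(self, y, y_pred):
        # stand-in for something costly (e.g. a ranking metric)
        return max(abs(a - b) for a, b in zip(y, y_pred))
```

Without a `training` flag reaching `compute_metrics`, this kind of per-phase gating has to be hacked in from outside.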
> The API change looks good to me. I don't quite understand the factoring though.

This PR aims to grant users better control over the metrics computation; it is...
> Can you try implementing that in the PR? Maybe that will be a nice improvement. Ok, I will try.
Hi @bhack, can you have a look at this?
> @nicolaspi - Keras recently introduced `get_metrics_result` API for `Model` that addresses part of this issue / PR. Can you please rebase and only address the `training` parameter added to...
Hi @fchollet, @rchao, is there an ETA for merging this? Thanks
> Hello @nicolaspi, my apologies, but we've encountered issues when attempting to merge this internally, because it's breaking the backward compatibility and thus numerous tests (there are many subclassed models...
@axch I made changes in the code you authored, could you kindly have a look at this PR? Thanks
Thanks for your feedback!

> * Do we need the dependency on tf.experimental.numpy?

We specifically need the `take_along_axis` function, which allows gathering slices along each batch dimension. I...
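For reference, `tf.experimental.numpy.take_along_axis` mirrors NumPy's function of the same name, so its behavior can be shown with plain NumPy (the data below is made up for illustration): for each batch row, it gathers the entries at that row's own indices.

```python
import numpy as np

# One row per batch element.
scores = np.array([[10, 20, 30],
                   [40, 50, 60]])

# For each row, the column(s) to gather (here: hand-picked).
idx = np.array([[2],
                [0]])

# Row 0 takes column 2, row 1 takes column 0.
picked = np.take_along_axis(scores, idx, axis=1)
# picked == [[30], [40]]
```

A plain `tf.gather` needs extra index bookkeeping to express this per-row gather, which is why the numpy-style helper is convenient here.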