Bug: Adapter lacks evaluation mode for stateful transforms
The .standardize() transform currently breaks net.sample in the starter notebook: it tries to update its parameters at sampling time, and the conditions passed to net.sample have a variance of zero, which corrupts the running statistics.
More generally, it is undesirable for the transforms to change after training, as this would lead to changing results with repeated evaluations.
What would be the best design to implement this? Ideally, the transform would only change when stage="training", similar to batch norm and other stateful layers. @LarsKue, do you already have any thoughts/plans for this?
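For illustration, here is a minimal sketch of what such a stage-aware transform could look like. The class name, `forward` signature, and `stage` argument are hypothetical, not BayesFlow's actual API:

```python
# Sketch of a stage-aware standardize transform, analogous to batch norm.
# All names here are illustrative, not BayesFlow's real implementation.
import numpy as np

class Standardize:
    def __init__(self, momentum=0.99, epsilon=1e-8):
        self.momentum = momentum
        self.epsilon = epsilon
        self.mean = None
        self.std = None

    def forward(self, data, stage="inference"):
        # Running statistics are only updated during training.
        if stage == "training":
            batch_mean = np.mean(data, axis=0)
            batch_std = np.std(data, axis=0)
            if self.mean is None:
                # First batch: initialize the running statistics.
                self.mean, self.std = batch_mean, batch_std
            elif self.momentum is not None:
                # Exponential moving average, as in batch norm.
                self.mean = self.momentum * self.mean + (1 - self.momentum) * batch_mean
                self.std = self.momentum * self.std + (1 - self.momentum) * batch_std
        # At inference the frozen statistics are used, so a zero-variance
        # batch (e.g. repeated conditions in net.sample) cannot corrupt them.
        # Assumes at least one training batch has been seen.
        return (data - self.mean) / (self.std + self.epsilon)
```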
Looks like this was an oversight when the momentum was added to standardization.
A stage argument would be the standard way to solve this, except we don't have access to the stage inside the Dataset, where the Adapter is most importantly used. @stefanradev93 do you have an idea how we can support both momentum and stage differentiation?
Yup, generally, this needs a slight re-design, which will come together with inferring the batch_size / shape. For now, you can avoid the bug by setting momentum=None.
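For reference, the workaround would look something like this in the starter notebook's adapter setup (a sketch; the exact adapter construction may differ across BayesFlow versions):

```python
import bayesflow as bf

# Disabling momentum makes the standardization statistics fixed rather than
# updated with an exponential moving average, so repeated calls to
# net.sample no longer change them.
adapter = bf.Adapter().standardize(momentum=None)
```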