Creating an embedding net with BatchNorm1d fails due to the check of the embedding net's output size
When initializing and building the initial flow, these statements can raise exceptions: https://github.com/mackelab/sbi/blob/0fcf9842c9515fc089ea464fb11359d4c19c7331/sbi/neural_nets/flow.py#L59
When an embedding net with BatchNorm1d is used, batch norm expects a batch of inputs. Would it not be better to read the output size of the last layer via `embedding_net.model[-1].out_features`?
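To illustrate the failure, here is a minimal sketch with a hypothetical embedding net containing a `BatchNorm1d` layer (this net is an assumption, not the one from my actual code): in training mode, batch norm cannot compute batch statistics from a single sample, so forwarding a batch of size 1 raises an error.

```python
import torch
import torch.nn as nn

# Hypothetical embedding net with BatchNorm1d, mirroring the setup in the issue.
embedding_net = nn.Sequential(
    nn.Linear(10, 20),
    nn.BatchNorm1d(20),
    nn.ReLU(),
    nn.Linear(20, 5),
)

x = torch.randn(3, 10)  # a small batch of fake observations

# In training mode, BatchNorm1d needs more than one value per channel
# to estimate batch statistics, so a single-sample batch fails.
try:
    embedding_net(x[:1])
except ValueError as e:
    print("single-sample forward failed:", e)

# With two or more samples the forward pass succeeds.
out = embedding_net(x[:2])
print(out.shape)  # torch.Size([2, 5])
```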
Hey,
Thanks for reporting this limitation. Your suggestion, however, won't work in general, as not all PyTorch modules have the attribute `out_features` (e.g., activation functions).
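For instance, a quick check (a sketch using standard `torch.nn` modules) shows that linear layers expose `out_features` while activation functions do not:

```python
import torch.nn as nn

# Linear layers carry `out_features`, but activation modules do not,
# so inspecting `embedding_net.model[-1].out_features` breaks whenever
# the last layer is, e.g., a ReLU.
print(hasattr(nn.Linear(10, 5), "out_features"))  # True
print(hasattr(nn.ReLU(), "out_features"))         # False
```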
There should be two ways to fix this:
- Give the embedding net a batch of at least two data points.
- Switch to evaluation mode for this operation, i.e., `embedding_net.eval()`.
If you pass the embedding net in evaluation mode to the flow builder, it should also work, no? Later, during training, it will automatically be switched back to train mode.
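The two workarounds above can be sketched as follows (again with a hypothetical `BatchNorm1d` embedding net, not the original one from the issue):

```python
import torch
import torch.nn as nn

embedding_net = nn.Sequential(
    nn.Linear(10, 20),
    nn.BatchNorm1d(20),
    nn.ReLU(),
    nn.Linear(20, 5),
)
batch_y = torch.randn(8, 10)  # fake data batch

# Workaround 1: infer the output size from a batch of at least two points.
y_numel = embedding_net(batch_y[:2]).numel() // 2
print(y_numel)  # 5

# Workaround 2: switch to eval mode, where BatchNorm1d uses its running
# statistics and accepts a single sample; switch back to train afterwards.
embedding_net.eval()
y_numel = embedding_net(batch_y[:1]).numel()
embedding_net.train()
print(y_numel)  # 5
```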
Kind regards, Manuel
Hi Manuel,
Your suggestions also work. I just wanted to point out that this happens during the initialization phase: `density_estimator_function = posterior_nn(model="maf", embedding_net=embedding_net, hidden_features=num_hidden, num_transforms=number_of_transforms)` calls `build_maf`, which then does `y_numel = embedding_net(batch_y[:1]).numel()`.
I can try to take this