Ensembles of Approximators
This draft-PR is the result of discussions with @elseml and @stefanradev93.
The goal is fast and convenient support for approximator ensembles; the first steps towards this have already been taken.
- We envision `ApproximatorEnsemble` as the abstraction at the heart of future workflows using ensembles.
  - Fundamentally, it is a wrapper of a dictionary of arbitrary `Approximator` objects.
  - It overrides the central methods `compute_metrics`, `build`, and `sample`, and passes inputs on to the respective ensemble members' methods.
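The actual implementation lives in `bayesflow/approximators/approximator_ensemble.py` (see the Codecov report below). The following is only an illustrative sketch of the dispatch pattern described above; the base class, method signatures, and per-member data handling are simplified and not the real code:

```python
import keras


class ApproximatorEnsemble(keras.Model):
    """Illustrative sketch of the dispatch pattern, not the actual implementation."""

    def __init__(self, approximators: dict, **kwargs):
        super().__init__(**kwargs)
        # dictionary of arbitrary Approximator objects, keyed by member name
        self.approximators = approximators

    def build(self, data_shapes):
        # build every ensemble member from the same data shapes
        for approximator in self.approximators.values():
            approximator.build(data_shapes)

    def compute_metrics(self, *args, **kwargs):
        # collect each member's metrics under its dictionary key
        return {
            name: approximator.compute_metrics(*args, **kwargs)
            for name, approximator in self.approximators.items()
        }

    def sample(self, *args, **kwargs):
        # each member produces its own draws; results are keyed by member name
        return {
            name: approximator.sample(*args, **kwargs)
            for name, approximator in self.approximators.items()
        }
```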
- Since ensembles should cover the sensitivity with respect to all randomness in approximators, which is not just initialization but also the random order of training batches, we need slightly modified datasets.
  - For now, only `OfflineEnsembleDataset` is implemented. It makes sure that training batches have an additional dimension at the second axis, containing multiple independent random slices of the available offline samples (see the shape sketch below).
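For concreteness, here is a small, purely illustrative NumPy sketch of that batch layout; the array names, dimensions, and sampling with replacement are assumptions, not the dataset's actual code:

```python
import numpy as np

rng = np.random.default_rng(0)

num_offline = 1000   # available offline simulations
batch_size = 32
num_ensemble = 4     # number of ensemble members
param_dim = 5        # e.g. dimensionality of the inference variables

offline_samples = rng.normal(size=(num_offline, param_dim))

# one independent random slice of the offline data per ensemble member,
# stacked along the second axis of the batch
indices = rng.integers(0, num_offline, size=(batch_size, num_ensemble))
batch = offline_samples[indices]

print(batch.shape)  # (32, 4, 5) == (batch_size, num_ensemble, param_dim)
```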
A few things are missing, among them:

- [x] predict/estimate methods for `ApproximatorEnsemble` (currently `sample` exists)
- [x] tests for `ApproximatorEnsemble`
- [ ] doc strings for `ApproximatorEnsemble`
- [ ] `OnlineEnsembleDataset`
- [ ] `DiskEnsembleDataset`
- [x] tests for ensemble datasets
- [ ] some Workflow
You can check out the example notebook here: https://github.com/bayesflow-org/bayesflow/blob/ensembles/examples/ApproximatorEnsemble%20example.ipynb
Codecov Report
:x: Patch coverage is 98.90110% with 1 line in your changes missing coverage. Please review.
| Files with missing lines | Patch % | Lines |
|---|---|---|
| bayesflow/approximators/approximator_ensemble.py | 98.61% | 1 Missing :warning: |
| Files with missing lines | Coverage Δ | |
|---|---|---|
| bayesflow/__init__.py | 49.05% <100.00%> (ø) | |
| bayesflow/approximators/__init__.py | 100.00% <100.00%> (ø) | |
| bayesflow/approximators/approximator.py | 79.54% <100.00%> (-0.46%) | :arrow_down: |
| ...low/approximators/model_comparison_approximator.py | 85.79% <100.00%> (+0.59%) | :arrow_up: |
| bayesflow/datasets/__init__.py | 100.00% <100.00%> (ø) | |
| bayesflow/datasets/offline_ensemble_dataset.py | 100.00% <100.00%> (ø) | |
| bayesflow/approximators/approximator_ensemble.py | 98.61% <98.61%> (ø) | |
Good job, Hans! FYI, commit 955ac79 uncovered a bug in `ModelComparisonApproximator`'s `build_from_data` method, which d8d84c8 addresses.
Nice work :) While trying it out a bit, the additional dimension for each ensemble member in the data caught me by surprise, and it took me a while to figure out why a dimension was missing from my data during training. I'm not sure what the best interface would be here, but I think it would be good to think about the design for a bit. Maybe the following questions will get us closer to what we want to have:
- Should the ordinary dataset classes be supported? If yes, which mode should they operate in (does it need a warning?); if no, we might want to override `fit` to check for them and raise an error if they are passed.
- Should there be multiple "modes" for the `...EnsembleDataset`s, i.e., identical data vs. different data? Might be especially relevant for online training, as different data increases the required compute.
- How do we handle custom dataset classes, and what shapes do we expect from them?
As I did not take part in the discussions, maybe you have already talked this through. In any case, I would be happy to hear your thoughts on this...
I have added serialization support, but deserialization fails when multiple approximators use the same weights, e.g. when they share a summary network. I'm not sure yet how this can be resolved, and whether we want to enable serialization if we cannot resolve it...
@han-ol could you perhaps provide a minimal code example for how the ensembles should work? On that basis, it might then also be easier to discuss interface questions, including those of @vpratz.
@paul-buerkner You can find one as part of the PR: https://github.com/bayesflow-org/bayesflow/blob/ensembles/examples/ApproximatorEnsemble%20example.ipynb
@han-ol and I had a discussion on the dataset question. One approach would be a general `EnsembleDatasetWrapper` with the following properties (see the sketch after this list):

- it takes in an arbitrary `dataset`, e.g. an instance of `OnlineDataset` or `OfflineDataset`
- it determines the batch size either by reading `dataset.batch_size` or by sampling a batch from the dataset
- it has a parameter like `unique_data_fraction` to control whether all ensemble members get the same data (0.0) or every member gets different data (1.0); for values in between, a bootstrap procedure can be used
- the required number of simulations can be obtained by repeatedly sampling batches from `dataset`, or, for our own simulators, by changing the `dataset.batch_size` parameter; the latter would be a bit hacky, so we would have to see if we want this
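A rough sketch of what such a wrapper could look like, assuming the wrapped datasets follow the `keras.utils.PyDataset` interface and return dictionaries of arrays; the class name and `unique_data_fraction` come from the points above, while the base class, indexing, and shape handling are illustrative only (bootstrapping for intermediate fractions is omitted for brevity):

```python
import numpy as np
import keras


class EnsembleDatasetWrapper(keras.utils.PyDataset):
    """Illustrative sketch of the proposed wrapper, not a final design.

    Wraps an arbitrary dataset and inserts an ensemble axis at position 1.
    unique_data_fraction=0.0 shows identical data to all members,
    unique_data_fraction=1.0 shows independently drawn data to each member.
    """

    def __init__(self, dataset, num_ensemble: int, unique_data_fraction: float = 1.0, **kwargs):
        super().__init__(**kwargs)
        self.dataset = dataset
        self.num_ensemble = num_ensemble
        self.unique_data_fraction = unique_data_fraction

    def __len__(self):
        return len(self.dataset)

    def __getitem__(self, index):
        if self.unique_data_fraction == 0.0:
            # identical data: repeat the same batch along the new ensemble axis
            batch = self.dataset[index]
            return {
                key: np.repeat(value[:, None, ...], self.num_ensemble, axis=1)
                for key, value in batch.items()
            }

        # different data: draw one batch per member and stack along axis 1;
        # for an OnlineDataset every __getitem__ call simulates fresh data,
        # for an OfflineDataset one would instead pick different indices
        batches = [self.dataset[index] for _ in range(self.num_ensemble)]
        return {
            key: np.stack([b[key] for b in batches], axis=1)
            for key in batches[0]
        }
```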
In addition, in the approximator ensemble class, we can determine from the shape of `inference_variables` whether a standard dataset (like `OnlineDataset`) was used. If so, we default to showing the same data to all ensemble members.
This lets us keep using our existing datasets, and only requires this one additional class to add the capability of passing different data to different approximators.
I am strongly in favor of this idea. We also discussed in the past that this is a more elegant and catch-all solution.