bmi
Mutual information estimators and benchmark
- [x] Use the NeurIPS template.
- [x] Cite the variational estimators literature (Poole et al., Song and Ermon, McAllester and Stratos).
- [x] Benchmark with tasks created using the...
As proposed by @grfrederic, this PR updates the naming conventions. Tasks:
- [ ] Change the import in the package (`from ... import ... as bmm` rather than `from...
1. Include information about the available estimators.
2. Show basic examples of running a given estimator on a given task (see the sketch below).
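As an illustration of what such an example could look like, here is a minimal sketch; the task key, the module paths (`bmi.benchmark`, `bmi.estimators`), and the `KSGEnsembleFirstEstimator` class are assumptions and may not match the exported API exactly:

```python
import bmi

# Pick one benchmark task (the key below is a placeholder; use an actual task name).
task = bmi.benchmark.BENCHMARK_TASKS["1v1-normal-0.75"]

# Draw a finite sample from the task's distribution.
X, Y = task.sample(1000, seed=42)

# Run an estimator on the sample and compare it against the ground-truth
# mutual information exposed by the task.
estimator = bmi.estimators.KSGEnsembleFirstEstimator(neighborhoods=(5,))
print("Estimate:    ", estimator.estimate(X, Y))
print("Ground truth:", task.mutual_information)
```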
In our samplers, `.mutual_information()` is a callable method; in tasks, it is a `@property`. It would be nice for them to be consistent.
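A minimal illustration of the inconsistency, with toy stand-ins for the real sampler and task classes:

```python
class ToySampler:
    """Stand-in for a sampler: MI is exposed as a callable method."""

    def mutual_information(self) -> float:
        return 0.75  # placeholder value


class ToyTask:
    """Stand-in for a task: MI is exposed as a read-only property."""

    @property
    def mutual_information(self) -> float:
        return 0.75  # placeholder value


sampler, task = ToySampler(), ToyTask()
print(sampler.mutual_information())  # a call is required here...
print(task.mutual_information)       # ...but plain attribute access here
```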
There's room for improvement in the documentation:
- [ ] Add a picture of the benchmark to the ReadMe/docs.
- [x] Explicitly list the estimators and cite their sources (see #135). Some...
We've made a lot of changes when moving to the new tasks/benchmark API. It would be nice to rethink which tasks, estimators, etc. should be exported by default. For example: 1....
MINE, InfoNCE, and other neural estimators could use the random state to enable dropout. This requires changes to the training loop and some refactoring; a sketch is given below.
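This is not the package's actual training loop, just a minimal JAX sketch (toy bilinear critic, hypothetical parameter names) of how an explicit PRNG key could be threaded through a training step so that dropout sees fresh randomness at every iteration:

```python
import jax
import jax.numpy as jnp


def dropout(key, x, rate=0.5):
    """Inverted dropout driven by an explicit PRNG key."""
    keep = jax.random.bernoulli(key, 1.0 - rate, x.shape)
    return jnp.where(keep, x / (1.0 - rate), 0.0)


def critic(params, xs, ys, dropout_key=None):
    """Toy bilinear critic; dropout is applied only when a key is provided."""
    hx = jnp.tanh(xs @ params["wx"])        # (batch, hidden)
    if dropout_key is not None:
        hx = dropout(dropout_key, hx)
    return hx @ params["wy"].T @ ys.T       # (batch, batch) score matrix


def infonce_loss(params, xs, ys, dropout_key):
    """InfoNCE-style objective: diagonal entries are the positive pairs."""
    scores = critic(params, xs, ys, dropout_key)
    return -jnp.mean(jnp.diag(scores) - jax.nn.logsumexp(scores, axis=1))


def train_step(params, key, xs, ys, lr=1e-2):
    """Split the key each step so dropout uses fresh randomness."""
    key, dropout_key = jax.random.split(key)
    grads = jax.grad(infonce_loss)(params, xs, ys, dropout_key)
    params = jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)
    return params, key
```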
Add the [difference-of-cross-entropies estimator](https://arxiv.org/abs/1811.04251).
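For reference, a sketch of the quantity presumably involved, assuming the issue refers to the difference-of-entropies construction from that paper, in which both entropy terms are replaced by cross-entropy estimates under auxiliary models $q(y)$ and $q(y \mid x)$:

```latex
% I(X;Y) = H(Y) - H(Y|X); estimate each entropy by a cross-entropy:
\[
  \widehat{I}(X;Y)
  = \underbrace{\frac{1}{N}\sum_{i=1}^{N} -\log q(y_i)}_{\text{cross-entropy for } H(Y)}
  \;-\;
  \underbrace{\frac{1}{N}\sum_{i=1}^{N} -\log q(y_i \mid x_i)}_{\text{cross-entropy for } H(Y \mid X)}
\]
```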
Add some tasks to the benchmark based on the [fine distributions framework](https://arxiv.org/pdf/2310.10240.pdf), for example tasks involving outliers and discrete-continuous distributions. Note that some more thinking is needed about what exactly tasks...
Introduce principled versioning of the benchmark using GitHub releases. Additional changes:
- Version number in the Python code or the ReadMe?
- Draft the v1.0 release when it's done.