Pierre Ablin
Hi, this is just a random idea: it would be neat if a .py file that plots the figure of the benchmark output were also generated, a file...
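To make the idea concrete, here is a minimal sketch of what such a generated plotting file could look like; the results file name and the `solver`/`time`/`objective` columns are assumptions for illustration, not the benchmark's actual output format.

```python
# Hypothetical auto-generated plotting script: loads saved benchmark
# results and plots one convergence curve per solver.
import matplotlib.pyplot as plt
import pandas as pd

df = pd.read_csv("benchmark_results.csv")  # hypothetical results file
for solver, group in df.groupby("solver"):
    plt.semilogy(group["time"], group["objective"], label=solver)
plt.xlabel("Time (s)")
plt.ylabel("Objective value")
plt.legend()
plt.savefig("benchmark_figure.pdf")
```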
It would be nice to have a way for the user to pass their own optimal function value, so that it is not always taken as the minimum over all...
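A minimal sketch of the idea, with hypothetical names: if the user passes `f_star`, use it; otherwise fall back to the minimum observed over all values, which is the default behavior described above.

```python
import numpy as np

def suboptimality(objective_values, f_star=None):
    """Compute f(x_k) - f* for a sequence of objective values.

    If ``f_star`` is None, it is taken as the minimum over all
    recorded values (the current default).
    """
    objective_values = np.asarray(objective_values)
    if f_star is None:
        f_star = objective_values.min()
    return objective_values - f_star
```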
This PR adds an `ll_reject` parameter. During the iterations, each sample with a negative log-likelihood greater than `ll_reject` is rejected. This helps find artifacts.
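A minimal, self-contained sketch of the mechanism on a toy Gaussian model: samples whose negative log-likelihood exceeds `ll_reject` are excluded from each update. The function and parameter placement here are illustrative, not the PR's actual code.

```python
import numpy as np

def fit_mean(X, ll_reject=np.inf, n_iter=10):
    mu, sigma = X.mean(), X.std()
    for _ in range(n_iter):
        # Per-sample negative log-likelihood under N(mu, sigma^2).
        nll = 0.5 * ((X - mu) / sigma) ** 2 + np.log(sigma * np.sqrt(2 * np.pi))
        keep = nll <= ll_reject  # samples above the threshold are rejected
        mu, sigma = X[keep].mean(), X[keep].std()
    return mu, sigma

rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(size=1000), [10.0, -12.0]])  # two artifacts
print(fit_mean(X, ll_reject=5.0))  # the artifacts are rejected
```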
This adds a docstring to the `SinkhornOutput` class, which is often exposed to the user.
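A sketch of the kind of class docstring such a PR could add; the attribute names listed here are assumptions about the class, not its documented API.

```python
class SinkhornOutput:
    """Holds the output of a Sinkhorn solver run.

    Attributes:
        f: first dual potential.
        g: second dual potential.
        errors: convergence errors recorded during the iterations.
        reg_ot_cost: regularized optimal transport cost at the solution.
    """
```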
This adds a distillation experiment, where the inner problem is training a neural network on some samples; those samples are the outer variable, and the outer problem is validation on MNIST...
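A minimal sketch of the bilevel structure described above, using a linear least-squares model so the example stays self-contained; the actual experiment trains a neural network and validates on MNIST.

```python
import numpy as np

rng = np.random.default_rng(0)
X_val = rng.normal(size=(100, 10))  # stand-in for the MNIST validation set
y_val = rng.normal(size=100)

def inner_problem(X_distilled, y_distilled):
    # Inner problem: fit a model (here, least squares) on the distilled
    # samples, which play the role of the training set.
    w, *_ = np.linalg.lstsq(X_distilled, y_distilled, rcond=None)
    return w

def outer_objective(X_distilled, y_distilled):
    # Outer problem: validation loss of the inner solution; the distilled
    # samples are the outer variable being optimized.
    w = inner_problem(X_distilled, y_distilled)
    return np.mean((X_val @ w - y_val) ** 2)

X_distilled = rng.normal(size=(20, 10))  # outer variable
y_distilled = rng.normal(size=20)
print(outer_objective(X_distilled, y_distilled))
```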