nnest
emcee proposal
Hi Adam,
I played with three modifications to the MCMC sampling:
- Iteratively train the network, sample, train, etc.
- Store multiple networks during the training process, then sample from them proportionally to their sampling efficiency. This avoids relying on a seemingly good but actually overfitted "best" network.
- Use emcee as the MCMC sampler. Its population proposal samples a Gaussian ball (which is what we are training the network towards) more efficiently in moderate dimensions than a Gaussian Metropolis-Hastings proposal.
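To make the last point concrete, here is a minimal numpy sketch of the affine-invariant "stretch move" that emcee's ensemble proposal is built on (Goodman & Weare). This is a simplified serial version for illustration, not emcee's actual vectorised implementation:

```python
import numpy as np

def stretch_move_step(walkers, log_prob, a=2.0, rng=None):
    """One stretch-move update of the whole ensemble.

    walkers: (nwalkers, ndim) array of current positions.
    log_prob: callable evaluating the log target density at one point.
    """
    rng = np.random.default_rng() if rng is None else rng
    nwalkers, ndim = walkers.shape
    out = walkers.copy()
    for i in range(nwalkers):
        # Pick a complementary walker j != i.
        j = rng.integers(nwalkers - 1)
        if j >= i:
            j += 1
        # Draw the stretch factor z ~ g(z) with g(z) proportional to
        # 1/sqrt(z) on [1/a, a].
        z = ((a - 1.0) * rng.random() + 1.0) ** 2 / a
        proposal = out[j] + z * (out[i] - out[j])
        # Accept with probability min(1, z^(ndim-1) * p(x')/p(x)).
        log_accept = (ndim - 1) * np.log(z) + log_prob(proposal) - log_prob(out[i])
        if np.log(rng.random()) < log_accept:
            out[i] = proposal
    return out
```

Because the proposal is built from differences of walker positions, it automatically adapts its scale and orientation to the current population, which is what makes it efficient on a roughly Gaussian target.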
For Rosenbrock, --num_blocks=5 --hidden_dim=40 --num_layers=1 seems to work reliably in 2-20 dimensions. After a few sample-train iterations, the sampler becomes much more efficient than standard emcee (tested by setting --num_blocks=1 --num_layers=0).
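For reference, the N-dimensional Rosenbrock test target I mean is the standard one, used here as an unnormalised log-density:

```python
import numpy as np

def log_rosenbrock(x):
    """Negative N-D Rosenbrock function as an unnormalised log-density.

    The global maximum is at x = (1, ..., 1), where the value is 0.
    """
    x = np.asarray(x, dtype=float)
    return -np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2)
```

Its curved, narrow ridge is what makes it a good stress test for both the network fit and the proposal.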
For Himmelblau the sampler tends to lose modes. I think this is because I restart the sampler from scratch each iteration; I should instead initialise it with the last sampler population (the reshape in my sample() flattens everything, and I haven't figured out how to return it as is).
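The bookkeeping fix I have in mind is just to keep both views of the chain: the flat array for network training, and the unflattened final step as the seed for the next round. A hypothetical helper (this is not nnest's current API, only a sketch of the shape handling):

```python
import numpy as np

def split_chain(chain):
    """Split MCMC output of shape (nsteps, nwalkers, ndim).

    Returns (flat_samples, last_population): the flattened samples for
    training the network, plus the final walker positions to initialise
    the next sampling iteration instead of restarting from scratch.
    """
    nsteps, nwalkers, ndim = chain.shape
    flat = chain.reshape(nsteps * nwalkers, ndim)  # for network training
    last = chain[-1]                               # (nwalkers, ndim) seed
    return flat, last
```

Seeding each iteration with the previous population should keep walkers sitting on all of Himmelblau's modes rather than re-finding them from scratch.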
Later I want to look at nested sampling.
Cheers, Johannes