spearmint
Non sequential usage of spearmint with slice sampling.
Hi, I've been using optimize_hypers instead of sample_hypers since the beginning because I've always had the "Slice sampler shrank to zero!" error. I hadn't really investigated this problem until recently, when I understood what was happening.
My problem is simply that, for my application, I have to use spearmint in a non-sequential way, and sample_hypers uses the previously obtained values of mean, amp2 and noise to initialize the sampling. In my case, the previous values are somewhat irrelevant.
For a quick fix, I've set chooser.D = -1 before calling chooser.next. This reinitializes the parameters to a "stable" state, and the "Slice sampler shrank to zero!" error no longer appears.
So, here is my question: does this quick fix put the slice sampling algorithm in a much less efficient regime? I've made a quick test on one dataset and it seems to behave OK, but I don't know enough about slice sampling to understand the full impact of this modification.
Any thoughts on that?
Thanks, Alex.
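For readers unfamiliar with the failure mode: the "shrank to zero" error fires when the shrinkage loop of a slice sampler can never find an acceptable point, which happens when the acceptance threshold is computed from a state that is inconsistent with the current density — exactly what a stale hot start can cause. A toy univariate sketch (this is not spearmint's code; the function names and the `stale_logp0` parameter are my own, purely for illustration):

```python
import math
import random

def slice_sample_step(logp, x0, width=1.0, max_shrink=100, stale_logp0=None):
    """One univariate slice-sampling step with interval shrinkage.

    stale_logp0 mimics hot-starting from saved state: the acceptance
    threshold is computed from a stale log-density instead of logp(x0).
    """
    logp0 = logp(x0) if stale_logp0 is None else stale_logp0
    # log of the auxiliary height y ~ Uniform(0, p(x0))
    log_y = logp0 + math.log(random.random())
    # Random interval of the given width around x0
    left = x0 - random.random() * width
    right = left + width
    for _ in range(max_shrink):
        x1 = left + random.random() * (right - left)
        if logp(x1) > log_y:
            return x1
        # Rejected: shrink the interval toward x0 and try again
        if x1 < x0:
            left = x1
        else:
            right = x1
    raise RuntimeError("Slice sampler shrank to zero!")

log_gauss = lambda x: -0.5 * x * x - 0.5 * math.log(2 * math.pi)

random.seed(0)
slice_sample_step(log_gauss, 0.5)  # consistent state: returns a sample

# A stale threshold above the density's maximum means every candidate
# is rejected, so the interval shrinks until the guard fires.
try:
    slice_sample_step(log_gauss, 0.5, stale_logp0=10.0)
except RuntimeError as e:
    print(e)  # Slice sampler shrank to zero!
```

With a consistent threshold the step succeeds immediately; with the stale one the interval collapses around x0, which is the same symptom the chooser shows when its saved mean/amp2/noise no longer match the data.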
Hi Alex, that's certainly a use case that we hadn't anticipated. Yes, the chooser saves its state at each iteration and then continues sampling by 'hot starting' from the state at the last iteration. Typically it's a good idea to burn in (throw out) a large number of samples to initialize a sampler, so this hot start saves a lot of time that would otherwise be spent re-burning at each iteration. Of course, if subsequent calls to the chooser don't represent consecutive iterations, you can get strange behavior (such as the mean not being in the actual range of the outputs), which would indeed cause the sampler to crash out. An easy fix is to reinitialize and re-burn each time you call the chooser. You can do this by deleting the *Chooser.pkl file in the experiment directory. You may also want to set the flag --method-args=burnin=<100+> to make sure that the chooser burns in for enough samples. Hope that helps,
Jasper
On Mon, Jan 27, 2014 at 5:22 PM, recursix [email protected] wrote:
— Reply to this email directly or view it on GitHub: https://github.com/JasperSnoek/spearmint/issues/22
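Jasper's suggested reset — deleting the saved *Chooser.pkl so the next call to the chooser re-initializes and re-burns — can be scripted between runs. A small sketch (the experiment-directory argument and function name are hypothetical, not part of spearmint's API):

```python
import glob
import os

def reset_chooser_state(expt_dir):
    """Delete any saved chooser state in expt_dir so the next call to
    chooser.next starts from scratch (and re-burns) instead of
    hot-starting from a stale pickle."""
    removed = []
    for pkl in glob.glob(os.path.join(expt_dir, "*Chooser.pkl")):
        os.remove(pkl)
        removed.append(pkl)
    return removed
```

Combine this with a sufficiently large burn-in via --method-args, as suggested above, so each fresh start actually converges before proposals are drawn.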
I've performed my original test with GPEIChooser, which doesn't have a burnin parameter and only saves its Chooser.pkl when __del__ is called, so setting chooser.D = -1 alone was enough. I will switch to GPEIOptChooser and use a burn-in of 100.
If I understand the code correctly, I will also have to do the following before calling chooser.next:

import os  # for os.remove

chooser.D = -1                # triggers a call to _real_init
os.remove(chooser.state_pkl)  # delete Chooser.pkl so _real_init doesn't reload the old state
chooser.need_burnin = True    # makes sure the burn-in step is executed
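The three steps above can be bundled into a helper. FakeChooser below is a stub standing in for spearmint's chooser (the real GPEIOptChooser has many more attributes); it exists only to make the reset pattern runnable in isolation:

```python
import os
import pickle
import tempfile

class FakeChooser:
    """Stub with only the attributes the reset pattern touches."""
    def __init__(self, state_pkl):
        self.state_pkl = state_pkl
        self.D = 5               # pretend some data has already been seen
        self.need_burnin = False

def reset(chooser):
    chooser.D = -1               # triggers a call to _real_init in chooser.next
    if os.path.exists(chooser.state_pkl):
        os.remove(chooser.state_pkl)  # prevent _real_init from reloading old state
    chooser.need_burnin = True   # make sure the burn-in step runs

# Demonstration with a throwaway state file
state = os.path.join(tempfile.mkdtemp(), "GPEIOptChooser.pkl")
with open(state, "wb") as f:
    pickle.dump({"mean": 0.0}, f)
chooser = FakeChooser(state)
reset(chooser)
print(chooser.D, chooser.need_burnin, os.path.exists(state))  # -1 True False
```

Guarding os.remove with os.path.exists avoids a FileNotFoundError when the chooser hasn't saved state yet (e.g. on the very first call).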
Thanks for the tip. And most importantly, thanks for supporting an unanticipated use case ;)