Why does `sampler.get_autocorr_time()` give NaNs?
I'm trying to estimate 6 parameters with nwalkers=32 and nsteps=1e6.
```python
tau = sampler.get_autocorr_time()
print("taus", tau)
```

gives:

```
taus [nan nan 1. nan 1. 1.]
```
Hard to diagnose from such limited info. Are you sure that you don't have nans in your chains?
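A quick way to run that check (a minimal sketch; a synthetic random array stands in here for the real `sampler.get_chain()` output, which has shape `(nsteps, nwalkers, ndim)`):

```python
import numpy as np

# Synthetic stand-in for sampler.get_chain(), shape (nsteps, nwalkers, ndim).
chain = np.random.default_rng(0).normal(size=(1000, 32, 6))

finite = np.isfinite(chain).all()
print("all finite:", finite)

# Per-parameter range: a collapsed range (min ~= max) can break the
# autocorrelation estimate even when there are no NaNs at all.
flat = chain.reshape(-1, chain.shape[-1])
for i in range(flat.shape[-1]):
    print(f"param {i}: min={flat[:, i].min():.4g} max={flat[:, i].max():.4g}")
```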
No NaNs and no ±inf in the chains! I'm getting these warnings...
```
autocorr.py:36: RuntimeWarning: invalid value encountered in true_divide
  acf /= acf[0]
autocorr.py:41: RuntimeWarning: invalid value encountered in less
  m = np.arange(len(taus)) < c * taus
autocorr.py:99: RuntimeWarning: invalid value encountered in greater
  flag = tol * tau_est > n_t
```
That suggests that there's some sort of under- or overflow happening. Take a look at your chains to see if any parameters are especially large or small, and then post the specific chain that is failing somewhere and I'll take a look at it.
Thanks Dan. It seems like no real exploration of the parameter space is happening. The range (min, max) of the six parameters from the chain:

```
param   min                     max
0       -0.005010458610130385   -0.004670719405149635
1       38.04524868811062       38.045567812313415
2       7.545941815153315       7.546350085425933
3       0.1797592064878929      0.1802134523596176
4       3.4671571905145857      3.46753115770362
5       0.516954651642297       0.51744226666063
```

I am trying to understand why this is the case, and I don't know why the first parameter is negative when I'm specifying a prior range of 0.01 to 1 for it. Anyway, I'll explore more. I wanted to ask: I am now running nwalkers=1e3 and nsteps=1e5, which should be enough for 6 parameters, but I'm getting the warning below. What is a good number of steps and walkers to use given the number of parameters? Thanks again.
```
WARNING:root:The chain is shorter than 50 times the integrated autocorrelation time for 6 parameter(s).
Use this estimate with caution and run a longer chain!
N/50 = 200;
tau: [500.34525389 569.51911428 601.98316294 655.31369947 573.76753242 646.74954819]
```
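For context on what that warning is checking (a sketch using the tau values printed above): the rule of thumb is that the chain length should exceed roughly 50 times the largest integrated autocorrelation time.

```python
import numpy as np

# tau values copied from the warning above.
tau = np.array([500.34525389, 569.51911428, 601.98316294,
                655.31369947, 573.76753242, 646.74954819])

# Rule of thumb: the chain length N should be at least ~50 * max(tau)
# for the tau estimate itself to be trustworthy.
n_needed = 50 * tau.max()
print(int(np.ceil(n_needed)))   # 32766
```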
Hi, I'm having the same issue right now: my autocorrelation times sometimes come out as NaNs. I noticed that where my log_prob = -inf, the draw has unreasonable parameters. Specifically, one of my parameters is negative when I've specified 0 < parameter < 3 in the prior. My prior function is set up as in this tutorial, so I expect that if a parameter is outside my specified range it will return -np.inf, but this same setup is working for me in another notebook, so I'm not sure what's different, or maybe I'm not completely understanding the problem. The parameter estimates the MCMC gives me at the end are actually pretty on point, despite the NaNs in the autocorrelation time.
Wondering if there was ever a resolution for this issue? Thank you!
@laldoroty — if you're finding parameters out of bounds in your final chain that normally means that you've initialized walkers outside of the bounds. It's worth adding a check to make sure that you're not!
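One way to add that check before calling `run_mcmc` (a hypothetical sketch; `log_prior`, the bounds, and `p0` here stand in for your own prior function and initial state):

```python
import numpy as np

# Stand-in prior: finite inside 0 < theta_i < 3, -inf outside.
def log_prior(theta):
    return 0.0 if np.all((theta > 0.0) & (theta < 3.0)) else -np.inf

rng = np.random.default_rng(1)
p0 = 1.5 + 1e-4 * rng.standard_normal((32, 6))   # 32 walkers, 6 params

# Every walker's starting position should have a finite log-prior.
ok = np.isfinite([log_prior(p) for p in p0])
print("all walkers in bounds:", ok.all())
assert ok.all(), "some walkers start outside the prior support"
```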
Yes, I've definitely got parameters initialized out of bounds if I look at the State() object, but I thought the skip_initial_state_check=False argument in run_mcmc() would prevent that?
UPDATE: I had a '+' sign where a '*' should have been when initializing walkers, so that's why it was initializing so far out of bounds... lol
If you initialize out of bounds, those walkers typically won't ever move into bounds, and those chains will cause many problems (this one included :D). I'm glad you identified the issue!
The skip_initial_state_check flag bypasses a check for a different common initialization issue; you should only enable it if you have a really strong argument for doing so.