Mengyin Lu

Results: 13 comments of Mengyin Lu

Yes, the general branch's results look very similar to my old results! Anyway, in my current simulations df1 and df2 are always the same for all observations, so it is...

1. Yes, I sometimes got errors with the old "gradient" function; replacing it with my new gradient function fixed the divide-by-zero problem and didn't affect the results in general...

I see... Say, use the values where the CDF is almost 1 (or 0) to decide the "plausible" range?
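A minimal sketch of that idea in R, assuming a standard normal likelihood: take the quantiles beyond which the CDF is within a small tolerance of 0 or 1 and treat them as the endpoints of the plausible range. The tolerance `eps` here is a hypothetical choice, not something fixed by the discussion.

```r
# Hypothetical sketch: decide a "plausible" range as the quantiles where
# the CDF is almost 0 or almost 1.
eps <- 1e-6                          # assumed tolerance, not from the thread
plausible <- qnorm(c(eps, 1 - eps))  # endpoints for a standard normal
plausible                            # symmetric interval around 0
```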

I think the argument corresponding to "+uniform" or "halfuniform" should be "mixcompdist=..." instead of "prior=...". This is indeed confusing. The argument "prior" actually controls whether we add more weight on...
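To illustrate the distinction, a hedged sketch of an `ashr::ash` call: `mixcompdist` selects the mixture component family (e.g. "halfuniform"), while `prior` sets the weighting on the null component. The simulated `betahat`/`sebetahat` here are just placeholder data.

```r
library(ashr)

# Placeholder data for illustration only.
set.seed(1)
betahat   <- rnorm(100)
sebetahat <- rep(1, 100)

fit <- ash(betahat, sebetahat,
           mixcompdist = "halfuniform",  # component family ("+uniform", "halfuniform", ...)
           prior       = "nullbiased")   # extra weight on the null component
```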

@jhsiao999 Could you send me your data so I can check it and debug? Thanks!

Introducing ash_XXX makes more sense to me. Currently we have to set betahat=0 and sebetahat=1 for non-normal/t likelihoods, which seems really confusing...

@jhsiao999 I also got the warning "REBayes::KWDual(A, rep(1, k), normalize(w), control = list(rtol = 0.1)) : estimated mixing distribution has some negative values: consider reducing rtol". I checked the data but...

@pcarbo Thanks for the explanation! Maybe in this case we can use optmethod="mixEM" instead? For this example, the fitted proportions of "mixIP" (after setting negative proportions to 0) and "mixEM"...
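A sketch of the comparison being described, assuming placeholder data: fit the same model with the interior-point solver ("mixIP", which uses REBayes) and with the EM solver ("mixEM"), then inspect the fitted mixture proportions from each.

```r
library(ashr)

# Placeholder data for illustration only.
set.seed(1)
betahat   <- rnorm(100)
sebetahat <- rep(1, 100)

fit_ip <- ash(betahat, sebetahat, optmethod = "mixIP")
fit_em <- ash(betahat, sebetahat, optmethod = "mixEM")

pi_ip <- pmax(get_fitted_g(fit_ip)$pi, 0)  # clamp small negative proportions to 0
pi_em <- get_fitted_g(fit_em)$pi
```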

Sorry, I should have made rtol smaller... Now I've tried rtol=1e-8 and even smaller values (1e-9, 1e-10, ...), but the results don't change. Anyway, mixIP's log-likelihood is still lower than that of...
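A sketch of tightening the solver tolerance, assuming `ash`'s `control` list is passed through to the "mixIP" optimizer (and ultimately to REBayes::KWDual); the data here are placeholders.

```r
library(ashr)

# Placeholder data for illustration only.
set.seed(1)
betahat   <- rnorm(100)
sebetahat <- rep(1, 100)

# Assumption: control entries are forwarded to the mixIP solver.
fit <- ash(betahat, sebetahat, optmethod = "mixIP",
           control = list(rtol = 1e-8))
fit$loglik  # compare against the mixEM fit's log-likelihood
```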

I agree! Having betahat/sebetahat for the general likelihoods is indeed confusing. It would be much clearer if we could just avoid them.