se_dLOOic not being calculated, and warnings about relative effective sample sizes and Pareto k diagnostics
I've run my models, and when I get to the comparison step, compare_models generates the table but also says:
> There were warnings (use warnings() to see them)

These are the warning messages:

> Relative effective sample sizes ('r_eff' argument) not specified.
> For models fit with MCMC, the reported PSIS effective sample sizes and
> MCSE estimates will be over-optimistic.
>
> Some Pareto k diagnostic values are too high. See help('pareto-k-diagnostic') for details.
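From the loo documentation, my understanding is that the first warning goes away if r_eff is passed to loo(). A minimal sketch of what that looks like when calling loo directly (log_lik_arr is hypothetical, standing in for an iterations x chains x observations array of pointwise log-likelihood draws from one model):

```r
library(loo)

# 'log_lik_arr' is hypothetical: an iterations x chains x observations
# array of pointwise log-likelihood draws from one fitted model.
# relative_eff() expects draws on the likelihood scale, hence exp().
r_eff <- relative_eff(exp(log_lik_arr))

# Passing r_eff silences the warning and gives honest PSIS effective
# sample sizes and MCSE estimates.
fit_loo <- loo(log_lik_arr, r_eff = r_eff)
print(fit_loo)
```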
I've run my candidate list of models at the normal run length; when 2 or 3 don't converge, I re-run those at "long" or "very long" and then add them back into the comparison table. But I don't think that explains the warnings or the missing se_dLOOic values. Also, I have 12 models, so why are 17 warnings generated? Are the warnings tied to specific models? Can I tell which model produced them, or do I have to run the models through one at a time to see which ones trigger warnings?
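One way I could presumably isolate which models trigger which warnings is to run loo on each fit separately and capture the warnings as they are raised. A sketch, where model_list is the same list I pass to compare_models and get_loglik() is a hypothetical stand-in for however the pointwise log-likelihood draws are pulled out of one fit:

```r
# 'model_list' is the list of fitted models passed to compare_models();
# get_loglik() is a hypothetical helper returning the pointwise
# log-likelihood draws for one fit.
for (i in seq_along(model_list)) {
  withCallingHandlers(
    loo::loo(get_loglik(model_list[[i]])),
    warning = function(w) {
      message("Model ", i, ": ", conditionMessage(w))
      invokeRestart("muffleWarning")
    }
  )
}
```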
I have found a few threads related to the warnings, but none clearly about the missing SE. I'm assuming it has something to do with what @ericward-noaa says in this comment: https://github.com/brianstock/MixSIAR/issues/187
> The weights that compare_models calculates should be interpreted very coarsely and not probabilistically. They're derived the same way that delta-AIC weights are,
But I'm not sure why knowing they're derived the same way AIC weights are matters for explaining the need for coarse interpretation (see the quick check after the table below). The code for the SE is https://github.com/brianstock/MixSIAR/blob/2b1b2545638fcfaacbcaaa695e2bec436a91806f/R/compare_models.R#L130
The threads I found on the warnings:

- https://discourse.mc-stan.org/t/getting-warning-with-loo-package-regarding-r-eff/3937
- https://discourse.mc-stan.org/t/practical-implications-of-many-high-pareto-k-observations-loo/17905/3
When I read these my eyes cross and I'm set back to square one: I can't see how my data falls into the misspecification problem or the sample-size issues they describe. I can see a possible sample-size issue with a continuous variable, but not all of my models have continuous variables.
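For the Pareto k side, my understanding from the loo docs is that you can list exactly which observations are flagged once you have a loo object for a single model (fit_loo as in the sketch above):

```r
# 'fit_loo' is a loo object for one model, as in the earlier sketch.
print(loo::pareto_k_table(fit_loo))                     # observation counts per k range
flagged <- loo::pareto_k_ids(fit_loo, threshold = 0.7)  # which observations have high k
flagged
```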
My results from compare_models:

Model | Covariate(s) | LOOic | se_LOOic | dLOOic | se_dLOOic | weight |
---|---|---|---|---|---|---|
10 | Treatment by size | 486 | 40.4 | 0 | NA | 0.989 |
6 | Treatment by Location | 495.2 | 42.5 | 9.2 | NA | 0.01 |
12 | Bi-Weekly | 499.1 | 41.6 | 13.1 | NA | 0.001 |
7 | size by Location | 504.7 | 37.7 | 18.7 | NA | 0 |
3 | Location | 511.8 | 36.9 | 25.8 | NA | 0 |
11 | Month | 511.8 | 44.6 | 25.8 | NA | 0 |
5 | Treatment | 513.3 | 48.9 | 27.3 | NA | 0 |
8 | Treatment by Sex | 514.9 | 49 | 28.9 | NA | 0 |
9 | size | 554.5 | 42.4 | 68.5 | NA | 0 |
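For what it's worth, the weight column does reproduce from the dLOOic column with the delta-AIC formula w_i = exp(-dLOOic_i / 2) / sum_j exp(-dLOOic_j / 2), which seems to be what the comment above refers to. A quick check in R with the values from the table:

```r
# dLOOic column from the table above:
dloo <- c(0, 9.2, 13.1, 18.7, 25.8, 25.8, 27.3, 28.9, 68.5)
w <- exp(-0.5 * dloo) / sum(exp(-0.5 * dloo))
round(w, 3)
#> [1] 0.989 0.010 0.001 0.000 0.000 0.000 0.000 0.000 0.000
```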
Hello,
I wouldn't worry about that warning.
Re: the NA se_dLOOic values, sorry, but without looking at your model output I can't tell. Line 134 of compare_models calculates the se_dLOOic for all pairwise combinations of models, so that is probably why you get more warnings than models. To figure out where the issue is, you can run through the compare_models code line by line.
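If it helps, a minimal way to do that is to set a one-shot breakpoint on the function (model_list below stands in for your list of fitted models):

```r
# Set a one-shot breakpoint on compare_models(), then call it as usual.
debugonce(MixSIAR::compare_models)
MixSIAR::compare_models(model_list)
# At the Browse[2]> prompt, type 'n' to advance one line at a time and
# inspect the intermediate loo objects around the se_dLOOic calculation.
```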