Implement fail-safe publication bias method
Summary
The fail-safe method, described in Acar et al. (2018), appears to work by incrementally adding null experiments to an ALE meta-analysis and determining the point at which significant results disappear. The null experiments seem to draw coordinates randomly from across the brain mask (much like the Monte Carlo FWE correction procedure). I haven't read the paper yet, but @ldhill is looking to try it out for a manuscript, and it seems like something we could implement as a Transformer in NiMARE.
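For intuition, here is a minimal sketch of how null foci might be drawn uniformly from a brain mask, assuming nilearn/nibabel are available; `sample_null_coordinates` is a hypothetical name, not an existing NiMARE function:

```python
import numpy as np
import nibabel as nib
from nilearn.datasets import load_mni152_brain_mask


def sample_null_coordinates(n_foci, mask_img=None, seed=None):
    """Draw n_foci random peak coordinates uniformly from in-mask voxels.

    Hypothetical helper; the actual null-generation step may differ.
    """
    if mask_img is None:
        mask_img = load_mni152_brain_mask()
    rng = np.random.default_rng(seed)
    mask = np.asarray(mask_img.dataobj).astype(bool)
    ijk = np.argwhere(mask)  # voxel indices of all in-mask voxels
    picks = ijk[rng.choice(len(ijk), size=n_foci, replace=True)]
    # Convert voxel indices to world-space (mm) coordinates via the affine
    return nib.affines.apply_affine(mask_img.affine, picks)
```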
Additional details
There appear to be R implementations in NeuroStat/FailSafeN and NeuroStat/GenerateNull (as referenced in https://github.com/neurostuff/NiMARE/issues/294#issuecomment-997286107).
This is related to #96.
Here is my attempt at a summary of the method (most of the work was done by @ldhill):
- Run the meta-analysis with multiple-comparisons correction.
- Identify significant clusters in the meta-analysis.
- Select a minimum number of null studies (m) and a maximum number (M).
- The minimum number is typically 2x the number of studies in the meta-analysis.
- The maximum number is typically 10x the number of studies in the meta-analysis.
- Create a simulated dataset with the minimum number of null studies (m) added and run a meta-analysis on the simulated dataset.
- If every cluster from the original meta-analysis is no longer significant, stop the procedure and note that "results may not be robust to bias due to missing (null) studies in the meta-analysis."
- If any cluster from the original meta-analysis is still significant, continue the procedure.
- Create a simulated dataset with the maximum number of null studies (M) added and run a meta-analysis on the simulated dataset.
- If every cluster is no longer significant, continue to the next step.
- If any cluster is still significant, either note that in the manuscript or repeat the procedure with a new m set at the old M and a new M set somewhere higher.
- Binary-search the space of null study counts between m and M (see the sketch after this list).
- Start with (m + M) / 2 and halve the search interval at each step.
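Here is a minimal sketch of the bisection loop described above. `still_significant(dset, n_nulls)` is a hypothetical callable (not NiMARE API) that adds `n_nulls` simulated null studies to the dataset, reruns the corrected ALE, and returns True if any original cluster survives:

```python
def failsafe_n(dset, m, M, still_significant):
    """Binary-search for the largest null-study count at which any
    original cluster remains significant (the fail-safe N)."""
    if not still_significant(dset, m):
        return None  # results vanish at the minimum; flag as non-robust
    if still_significant(dset, M):
        return M  # still significant at the maximum; fail-safe N > M
    lo, hi = m, M  # invariant: significant at lo, not significant at hi
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if still_significant(dset, mid):
            lo = mid
        else:
            hi = mid
    return lo
```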
Things we still need to figure out:
- I'm not sure how to perform this with multiple clusters.
- How is cluster significance determined?
- Is there a threshold for the proportion of voxels in the original cluster that need to be significant?
- Just use the center of mass of the clusters?
- Dice similarity index of any overlapping clusters?
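If the Dice option is pursued, the overlap computation itself is simple. A minimal sketch with plain numpy, where the inputs are binarized cluster maps (the threshold for calling a cluster "retained" would still need to be chosen):

```python
import numpy as np


def dice_coefficient(cluster_a, cluster_b):
    """Dice similarity between two binary cluster masks."""
    a = np.asarray(cluster_a, dtype=bool)
    b = np.asarray(cluster_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 0.0  # both masks empty; define overlap as zero
    return 2.0 * np.logical_and(a, b).sum() / denom
```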