NiMARE
Implement SDM meta-analysis techniques
I would like to use nimare to perform a meta-analysis. Is there a plan to implement the SDM algorithms (such as SDM-PSI, the latest version) as alternatives to ALE and MKDA?
We don't have immediate plans to implement all of SDM, as it's closed-source and none of the core developers of NiMARE are very familiar with the details of any of the algorithms. However, I did speak with @HBossier fairly recently, and he has an R-based implementation of at least one of the SDM algorithms that we were interested in translating over to Python and incorporating into NiMARE. I haven't had a chance to work on this yet, but maybe it's something we can work on adding.
Hi @JohannesWiesner,
I didn't find time yet to start working on this as I had to shift stuff on my to-do priority list...
These functions might also be useful if you want to work within R.
I'm going to use this comment to collect information I glean from SDM manuscripts.
The steps, as best I can understand them, of the SDM-PSI algorithm are:
- Use anisotropic Gaussian kernels, plus effect size estimates and metadata, to produce lower-bound and upper-bound effect size maps from the coordinates.
- We need generic inter-voxel correlation maps for this.
- We also need a fast implementation of Dijkstra's algorithm to estimate the shortest path (i.e., "virtual distance") between two voxels based on the map of correlations between each voxel and its neighbors. I think `dijkstra3d` might be useful here.
- Use maximum likelihood estimation to estimate the most likely effect size and variance maps across studies (i.e., a meta-analytic map).
- Use the MLE maps and each study's upper and lower-bound effect size maps to impute study-wise effect size and variance images that meet specific requirements.
- For each imputed pair of effect size and variance images, simulate subject-level images.
- The mean across subject-level images, for each voxel, must equal the value from the study-level effect size map.
- Values for each voxel, across subjects, must correlate at 1 (i.e., perfectly) with the values for the same voxel in all other imputations.
- Values of adjacent voxels must show "realistic" correlations as well. SDM uses tissue-type masks for this.
- SDM simplifies the simulation process by creating a single "preliminary" set of subject-level maps for each dataset (across imputations), and scaling it across imputations.
- Permutations. "The permutation algorithms are general."
- Combine subject-level images into study-level Hedges-corrected effect size images.
- Perform meta-analysis across study-level effect size maps using a random-effects model, separately for each imputation.
- One of our IBMA interfaces should be able to do this. Either DerSimonianLaird or VarianceBasedLikelihood.
- Compute imputation-wise heterogeneity statistics.
- Use "Rubin's rules" to combine heterogeneity statistics, coefficients, and variance for each imputed dataset.
- Perform Monte Carlo-like maximum statistic procedure to get null distributions for vFWE or cFWE. Or do TFCE.
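To make the "virtual distance" step above concrete, here's a minimal Dijkstra sketch over a voxel grid. The `virtual_distance` helper and the correlation-to-edge-weight mapping (`1 - correlation`) are my own illustrative assumptions, not SDM's actual formula; a real implementation would use something like `dijkstra3d` for speed.

```python
import heapq

def virtual_distance(shape, corr_fn, source, target):
    """Dijkstra shortest path between voxels `source` and `target` on a
    3D grid of the given shape. `corr_fn(a, b)` returns the correlation
    between 6-connected neighboring voxels a and b; the edge weight
    1 - correlation is a placeholder mapping, not SDM's exact formula."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    visited = set()
    while heap:
        d, node = heapq.heappop(heap)
        if node == target:
            return d
        if node in visited:
            continue
        visited.add(node)
        x, y, z = node
        for dx, dy, dz in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)):
            nb = (x + dx, y + dy, z + dz)
            if not all(0 <= c < s for c, s in zip(nb, shape)):
                continue
            nd = d + (1.0 - corr_fn(node, nb))
            if nd < dist.get(nb, float("inf")):
                dist[nb] = nd
                heapq.heappush(heap, (nd, nb))
    return float("inf")
```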
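For the subject-level simulation constraints, the simplest piece to sketch is the rescaling of a preliminary set of subject maps so each voxel's across-subject mean and variance match the imputed study-level values. This is only a rough illustration (the `rescale_subject_maps` name is mine); SDM's actual procedure also enforces cross-imputation and spatial correlation constraints.

```python
import numpy as np

def rescale_subject_maps(prelim, target_mean, target_var):
    """Affinely rescale preliminary subject-level maps so that, at each
    voxel, the across-subject mean and (ddof=1) variance match the
    study-level effect size and variance.

    prelim : (n_subjects, n_voxels) preliminary subject maps.
    target_mean, target_var : (n_voxels,) study-level targets.
    """
    mean = prelim.mean(axis=0)
    std = prelim.std(axis=0, ddof=1)
    z = (prelim - mean) / std  # zero mean, unit variance per voxel
    return target_mean + z * np.sqrt(target_var)
```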
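For the per-imputation random-effects step, one of NiMARE's IBMA estimators would be used in practice; purely to pin down what that step computes, here is the textbook voxel-wise DerSimonian-Laird estimator (not NiMARE code, and the function name is mine):

```python
import numpy as np

def dersimonian_laird(y, v):
    """Random-effects meta-analysis for a single voxel.

    y : (k,) study-level effect sizes; v : (k,) within-study variances.
    Returns (pooled effect, pooled variance, tau2).
    """
    k = len(y)
    w = 1.0 / v
    y_fixed = np.sum(w * y) / np.sum(w)          # fixed-effects estimate
    q = np.sum(w * (y - y_fixed) ** 2)           # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)           # between-study variance
    w_star = 1.0 / (v + tau2)                    # random-effects weights
    mu = np.sum(w_star * y) / np.sum(w_star)
    var = 1.0 / np.sum(w_star)
    return mu, var, tau2
```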
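The "Rubin's rules" combination across imputations is standard multiple-imputation pooling; a minimal sketch for scalar (per-voxel) estimates, with the function name my own:

```python
import numpy as np

def rubins_rules(estimates, variances):
    """Pool m imputation-wise estimates and variances via Rubin's rules.

    estimates, variances : (m,) arrays of per-imputation values.
    Returns (pooled estimate, total variance).
    """
    m = len(estimates)
    q_bar = np.mean(estimates)            # pooled point estimate
    w_bar = np.mean(variances)            # within-imputation variance
    b = np.var(estimates, ddof=1)         # between-imputation variance
    t = w_bar + (1.0 + 1.0 / m) * b       # total variance
    return q_bar, t
```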
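The vFWE part of the last step reduces to the usual max-statistic trick: take the maximum statistic over voxels within each permutation, then threshold at a percentile of those maxima. A minimal sketch (function name mine):

```python
import numpy as np

def max_stat_threshold(null_stat_maps, alpha=0.05):
    """Voxel-wise FWE-corrected threshold from a max-statistic null.

    null_stat_maps : (n_perm, n_voxels) statistic maps computed under
    permutation. Returns the 100 * (1 - alpha) percentile of the
    per-permutation maxima.
    """
    max_per_perm = null_stat_maps.max(axis=1)
    return np.quantile(max_per_perm, 1.0 - alpha)
```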
Misc. notes:
- The "permuted subject images" (PSI) method added in version 6.21 and described in Albajes-Eizagirre et al. (2019) should be the same as nilearn's `permuted_ols()` function (i.e., the Freedman-Lane algorithm). We already use that function for one of our IBMA algorithms, and it's very easy to apply to other kinds of maps.
  - Okay, not the whole method, but at least for a meta-regression I think `permuted_ols()` would be the way to go.
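For reference, the Freedman-Lane scheme behind `permuted_ols()` can be written out in plain NumPy for a single outcome: regress the outcome on the confounds, permute those residuals, add back the confound fit, then refit the full model. This is the generic recipe, not nilearn's implementation, and `freedman_lane_null` is a hypothetical name:

```python
import numpy as np

def freedman_lane_null(x, z, y, n_perm=1000, seed=0):
    """Freedman-Lane permutation null for the effect of x on y,
    controlling for confounds z. x : (n,), z : (n, p), y : (n,).
    Returns (observed t statistic, null distribution of t)."""
    rng = np.random.default_rng(seed)

    def tstat(y_, design):
        beta, *_ = np.linalg.lstsq(design, y_, rcond=None)
        resid = y_ - design @ beta
        dof = len(y_) - design.shape[1]
        sigma2 = resid @ resid / dof
        cov = sigma2 * np.linalg.inv(design.T @ design)
        return beta[0] / np.sqrt(cov[0, 0])  # x is the first column

    design = np.column_stack([x, z])
    observed = tstat(y, design)

    # Reduced model: regress y on confounds only; keep fit and residuals.
    gamma, *_ = np.linalg.lstsq(z, y, rcond=None)
    fitted = z @ gamma
    resid = y - fitted

    null = np.empty(n_perm)
    for i in range(n_perm):
        y_star = fitted + rng.permutation(resid)  # permute reduced residuals
        null[i] = tstat(y_star, design)
    return observed, null
```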