
Implement SDM meta-analysis techniques

Open JohannesWiesner opened this issue 6 years ago • 3 comments

I would like to use nimare to perform a meta-analysis. Is there a plan to implement the SDM algorithms (such as SDM-PSI, the latest version) as alternatives to ALE and MKDA?

JohannesWiesner avatar Nov 04 '19 08:11 JohannesWiesner

We don't have immediate plans to implement all of SDM, as it's closed-source and none of the core developers of NiMARE are very familiar with the details of any of the algorithms. However, I did speak with @HBossier fairly recently, and he has an R-based implementation of at least one of the SDM algorithms that we were interested in translating over to Python and incorporating into NiMARE. I haven't had a chance to work on this yet, but maybe it's something we can work on adding.

tsalo avatar Nov 04 '19 16:11 tsalo

Hi @JohannesWiesner,

I haven't found time to start working on this yet, as I had to reprioritize my to-do list...

These functions might also be useful if you want to work within R.

HBossier avatar Jan 21 '20 10:01 HBossier

I'm going to use this comment to collect information I glean from SDM manuscripts.

The steps, as best I can understand them, of the SDM-PSI algorithm are:

  1. Use anisotropic Gaussian kernels, plus effect size estimates and metadata, to produce lower-bound and upper-bound effect size maps from the coordinates.
    • We need generic inter-voxel correlation maps for this.
    • We also need a fast implementation of Dijkstra's algorithm to estimate the shortest path (i.e., "virtual distance") between two voxels based on the map of correlations between each voxel and its neighbors. I think dijkstra3d might be useful here.
  2. Use maximum likelihood estimation to estimate the most likely effect size and variance maps across studies (i.e., a meta-analytic map).
  3. Use the MLE maps and each study's upper and lower-bound effect size maps to impute study-wise effect size and variance images that meet specific requirements.
  4. For each imputed pair of effect size and variance images, simulate subject-level images.
    • The mean across subject-level images, for each voxel, must equal the value from the study-level effect size map.
    • Values for each voxel, across subjects, must correlate perfectly (r = 1) with the values for the same voxel in all other imputations.
    • Values of adjacent voxels must show "realistic" correlations as well. SDM uses tissue-type masks for this.
    • SDM simplifies the simulation process by creating a single "preliminary" set of subject-level maps for each dataset (across imputations), and scaling it across imputations.
  5. Permutations. "The permutation algorithms are general."
  6. Combine the subject-level images into study-level effect size images with the Hedges correction for small-sample bias.
  7. Perform meta-analysis across the study-level effect size maps using a random-effects model, separately for each imputation.
    • One of our IBMA interfaces should be able to do this. Either DerSimonianLaird or VarianceBasedLikelihood.
  8. Compute imputation-wise heterogeneity statistics.
  9. Use "Rubin's rules" to combine heterogeneity statistics, coefficients, and variance for each imputed dataset.
  10. Perform a Monte Carlo-like maximum-statistic procedure to build null distributions for voxel-level (vFWE) or cluster-level (cFWE) family-wise error correction, or use TFCE.
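As a rough illustration of step 1's "virtual distance" (not SDM's actual implementation, and simpler than what dijkstra3d provides), a plain Dijkstra over a voxel-neighbor graph whose edge weights are derived from inter-voxel correlations might look like this:

```python
import heapq

def virtual_distance(neighbors, source):
    """Shortest-path ("virtual") distance from a source voxel to all others.

    `neighbors` maps voxel -> {neighbor: edge_weight}, where the edge weight
    is some decreasing function of the correlation between adjacent voxels
    (here we just assume weight = 1 - r, which is a placeholder choice).
    """
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, v = heapq.heappop(heap)
        if d > dist.get(v, float("inf")):
            continue  # stale heap entry
        for n, w in neighbors[v].items():
            nd = d + w
            if nd < dist.get(n, float("inf")):
                dist[n] = nd
                heapq.heappush(heap, (nd, n))
    return dist

# Toy 1-D "volume" of four voxels; weights are 1 - r for adjacent pairs.
neighbors = {
    0: {1: 0.1},
    1: {0: 0.1, 2: 0.5},
    2: {1: 0.5, 3: 0.2},
    3: {2: 0.2},
}
dist = virtual_distance(neighbors, 0)  # dist[3] is approximately 0.8
```

In a real volume the graph would be the 26-connected voxel lattice, which is exactly the case dijkstra3d is optimized for.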
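For step 7, the DerSimonian-Laird estimator is standard and easy to sketch per voxel (this is a generic textbook version, not NiMARE's DerSimonianLaird interface):

```python
import numpy as np

def dersimonian_laird(y, v):
    """DerSimonian-Laird random-effects estimate for one voxel.

    y : per-study effect sizes; v : per-study within-study variances.
    Returns the pooled estimate, its variance, and tau^2 (between-study
    variance).
    """
    k = len(y)
    w = 1.0 / v
    fixed = np.sum(w * y) / np.sum(w)            # fixed-effects estimate
    q = np.sum(w * (y - fixed) ** 2)             # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)           # method-of-moments tau^2
    w_star = 1.0 / (v + tau2)                    # random-effects weights
    est = np.sum(w_star * y) / np.sum(w_star)
    var = 1.0 / np.sum(w_star)
    return est, var, tau2

# Hypothetical per-study effect sizes and variances at one voxel:
est, var, tau2 = dersimonian_laird(
    np.array([0.2, 0.5, 0.8]), np.array([0.04, 0.05, 0.06])
)
```

Applied voxel-wise across an imputation's study-level maps, this gives the meta-analytic effect size and variance maps for that imputation.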
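Step 9 (Rubin's rules) is also mechanical once you have per-imputation estimates and variances; a minimal sketch, assuming m imputations stacked along the first axis:

```python
import numpy as np

def rubins_rules(estimates, variances):
    """Pool estimates and variances across m imputations via Rubin's rules.

    estimates, variances : arrays of shape (m, ...) with one entry (or map)
    per imputation. Returns the pooled estimate and total variance.
    """
    m = len(estimates)
    q_bar = np.mean(estimates, axis=0)        # pooled point estimate
    u_bar = np.mean(variances, axis=0)        # average within-imputation var
    b = np.var(estimates, axis=0, ddof=1)     # between-imputation variance
    t = u_bar + (1.0 + 1.0 / m) * b           # total variance
    return q_bar, t

# Three imputations of a (scalar) voxel value:
q, t = rubins_rules(np.array([1.0, 2.0, 3.0]), np.array([0.5, 0.5, 0.5]))
```

The same function works unchanged on whole maps if `estimates` has shape (m, n_voxels).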
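And for step 10, the maximum-statistic idea is general: on each permutation, recompute the statistic map and keep only its maximum, then compare observed values against that null. A simplified sign-flipping sketch (using the mean across studies as a stand-in for the real meta-analytic statistic):

```python
import numpy as np

def max_stat_null(stat_maps, n_perm=1000, seed=0):
    """Null distribution of the maximum statistic via sign-flipping.

    stat_maps : (n_studies, n_voxels) signed study-level maps.
    Returns an array of n_perm maximum statistics, one per permutation.
    """
    rng = np.random.default_rng(seed)
    n_studies, _ = stat_maps.shape
    null_max = np.empty(n_perm)
    for i in range(n_perm):
        # Randomly flip each study's sign, recompute the map, keep the max.
        signs = rng.choice([-1.0, 1.0], size=(n_studies, 1))
        perm_map = (signs * stat_maps).mean(axis=0)
        null_max[i] = perm_map.max()
    return null_max

# vFWE-corrected p for an observed voxel value x would then be
# (null_max >= x).mean(); cFWE would track maximum cluster size instead.
```

cFWE follows the same recipe but records the largest suprathreshold cluster size per permutation rather than the largest voxel value.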

Misc. notes:

  • The "permuted subject images" (PSI) method added in version 6.21 and described in Albajes-Eizagirre et al. (2019) should be the same as nilearn's permuted_ols() function (i.e., the Freedman-Lane algorithm). We already use that function for one of our IBMA algorithms, and it's very easy to apply to other kinds of maps.
    • Okay, not the whole method, but at least for a meta-regression I think permuted_ols() would be the way to go.
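To make the Freedman-Lane connection concrete, here is a minimal from-scratch sketch of the scheme (nilearn's permuted_ols implements this far more efficiently and carefully; this is just the core idea, with hypothetical single-regressor inputs):

```python
import numpy as np

def freedman_lane(y, x, z, n_perm=500, seed=0):
    """Freedman-Lane permutation test of regressor x on outcome y,
    controlling for a nuisance covariate z. Returns (t_obs, p)."""
    rng = np.random.default_rng(seed)
    z = np.column_stack([np.ones_like(y), z])    # nuisance design (w/ intercept)
    beta_z, *_ = np.linalg.lstsq(z, y, rcond=None)
    fitted = z @ beta_z                          # nuisance-only fit
    resid = y - fitted                           # residuals to be permuted

    def t_of(yy):
        # t-statistic for x in the full model [z, x].
        X = np.column_stack([z, x])
        beta, *_ = np.linalg.lstsq(X, yy, rcond=None)
        e = yy - X @ beta
        sigma2 = e @ e / (len(yy) - X.shape[1])
        cov = sigma2 * np.linalg.inv(X.T @ X)
        return beta[-1] / np.sqrt(cov[-1, -1])

    t_obs = t_of(y)
    # Permute residuals, add back the nuisance fit, and re-test.
    null = np.array(
        [t_of(fitted + rng.permutation(resid)) for _ in range(n_perm)]
    )
    p = (np.sum(np.abs(null) >= abs(t_obs)) + 1) / (n_perm + 1)
    return t_obs, p
```

In a meta-regression setting, y would be the per-study effect sizes at one voxel, x the covariate of interest, and z the nuisance covariates; permuted_ols vectorizes this across all voxels at once.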

tsalo avatar Nov 14 '20 03:11 tsalo