BrainSpace
How can I obtain the lambda values corresponding to the aligned gradient map?
In other words, after alignment, the order of the original gradients may change, and the order of the corresponding lambda values may also change. How can I obtain this information?
Hi, I have the same confusion. Looking forward to your reply.
I have the same confusion~
Hi, folks
Apologies for the late reply
The lambdas we obtain with GradientMaps correspond to the original gradients (the order of the lambdas does not change). After alignment, there is no correspondence between the lambdas and the aligned gradients, and I don't know whether there is a way to obtain proxy lambdas for the aligned gradients.
For more context: say $G \in \mathbb{R}^{p\times n}$ is our original gradients matrix ($n$ different gradients and $p$ vertices, for example). With Procrustes alignment, we compute the Procrustes rotation matrix $\Theta\in\mathbb{R}^{n\times n}$ that would align $G$ to some reference gradients. Once we have the rotation matrix, the aligned gradients are calculated like this: $A = G\Theta$. Simply put, each one of the aligned gradients is a linear combination of the original gradients: $A_1 = G_1\Theta_{1,1} + G_2\Theta_{2,1} + \dots$
With this, the correspondence between the aligned gradients and the lambdas is lost.
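For concreteness, here is a minimal MATLAB sketch of the orthogonal Procrustes solution described above (illustrative only, not the exact BrainSpace implementation; `G` and `R` are hypothetical $p\times n$ matrices holding the source and reference gradients in their columns):

```matlab
% Minimal sketch of orthogonal Procrustes alignment (illustrative; not the
% exact BrainSpace implementation). G and R are p x n matrices holding the
% source and reference gradients in their columns.
[U, ~, V] = svd(G' * R);   % SVD of the n x n cross-product matrix
Theta = U * V';            % orthogonal rotation that best maps G onto R
A = G * Theta;             % aligned gradients
```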
There are some simple scenarios such as the one below (e.g., the rotation simply swaps gradients 1 and 2) where it is easy to know the correspondence, but such a clean permutation is very unlikely with real gradients.
$$\Theta = \begin{bmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$
Let's leave this open to see if we can get feedback from other users...
Wow, thanks for your kind help~ I noticed that some people have used the transformation matrix to adjust the order of the lambdas. So, in fact, is that method not proper for establishing the correspondence? For example: line 74 in https://github.com/mingruixia/MDD_ConnectomeGradient/blob/main/0_GradientCalculation/a_analysis_pipeline.m
If I get it right, the code is using the absolute values of the Procrustes rotation to find the correspondence between the original gradients and the aligned ones, based on the largest values.
Above, we had that $A_1 = G_1\Theta_{1,1} + G_2\Theta_{2,1} + \dots$. Let's say we only use 2 gradients. With this approach, we then have that $A_1$ corresponds to $G_2$ if $|\Theta_{2,1}| > |\Theta_{1,1}|$.
We are just assigning correspondence based on the largest values, but I think it's a good approximation.
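To make the approximation concrete, here is a minimal MATLAB sketch (variable names are illustrative; since the fitted object does not expose the rotation matrix, it is re-estimated here by least squares from the original and aligned gradients):

```matlab
% Sketch of the largest-|Theta| matching approximation (assumes a fitted
% GradientMaps object gm for a single subject; variable names illustrative).
G   = gm.gradients{1};   % original gradients (p x n)
A   = gm.aligned{1};     % aligned gradients  (p x n)
lam = gm.lambda{1};      % lambdas of the original gradients

Theta = pinv(G) * A;                 % re-estimate the rotation: A ~ G*Theta
[~, idx] = max(abs(Theta), [], 1);   % dominant original gradient per column
proxy_lambda = lam(idx);             % proxy lambda for each aligned gradient
```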
Thank you so much~
@OualidBenkarim thank you for the above explanation of the lambdas! I came here to ask a different question and now I realize I have more to think about. I'm simply (I thought) trying to extract the percent variance explained for the principal gradient.
This is the code I'm using to extract the default first 10 gradients:
gm = GradientMaps( 'kernel', 'cosine', 'approach', 'dm', 'alignment', 'procrustes' );
gm = gm.fit( my_FC_matrix, 'reference', my_template.gradients{1} );
In the output, gm.gradients and gm.aligned have 10 gradients as expected, but gm.lambda has 17 values when I would expect there to be 10 as well. This, as well as the above discussion about losing lambda correspondence with alignment, has me questioning whether it's appropriate to simply use the first lambda to get the explained variance I'm after. I do observe that the lambdas are always decreasing and that the first aligned gradient is always clearly the principal FC gradient, but I'm still questioning it based on these other factors.
Thanks!
Following up to say that searching through the docs more and experimenting a bit answered most of my question:
- "The lambda property stores the variance explained (for PCA) or the eigenvalues (for LE and DM). Note that all computed lambdas are provided even if this is more than the number of requested components." [per this page]
- Also, I ran with 'n_components' set to 17, and got an error that I had asked for more components than could be computed, and this time 17 components and 17 lambdas were produced. So I guess 17 is the max for all subjects in my data set? Which is odd because there are lots of papers that seem to have used BrainSpace to pull far more components than that, but admittedly I haven't spent a ton of time trying to figure out what they may be doing differently. (One way to turn the lambdas into percentages is sketched below.)
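Here is a minimal MATLAB sketch of one common convention for this (an assumption on my part, not something the BrainSpace docs prescribe for diffusion maps): normalize each lambda by the sum of all returned lambdas. It assumes a fitted GradientMaps object gm for a single subject.

```matlab
% One common convention (an assumption here; BrainSpace does not define a
% "variance explained" for diffusion maps): normalize the eigenvalues.
lam = gm.lambda{1};            % all computed lambdas (may exceed n_components)
pct = 100 * lam / sum(lam);    % percent of total eigenvalue per gradient
pct_principal = pct(1);        % share attributed to the principal gradient
```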
I'm happy to have a slightly better understanding now, but I'm still wondering how 17 is the max, and also reiterating @gourdchen's question about variance explained for the aligned gradients... The solution described above seems like quite a rough approximation, as you mention, and I'd expect that there would have to be some way to use the rotation matrix with the full set of gradients and lambdas, but I'm not super confident that I can figure out that math.
Hi @annchenknodt,
About your last question, I think the approach described above is a good approximation. I can't think of any other method that would do a better job.
As for the n_components, it depends on the size of your matrix. For an $n\times n$ matrix, you can extract at most $n$ components/lambdas.
@OualidBenkarim thank you! I've come to the conclusion that papers that report variance explained must just be reporting it for the pre-rotation gradients...
Re n_components, my matrix is 360x360 (I'm using the HCP-MMP Glasser parcellation), so that's why I'm a bit miffed that I'm getting 17 components. I had tried setting it to 200 (I think I accidentally said 17 in my previous post), and got an error that I requested too many, which still doesn't make sense.
Hi @annchenknodt,
I'm going to assume you're using the MATLAB implementation since you posted a .m file before. The diffusion mapping code actually sets the maximum number of components to the square root of the matrix size.
Truth be told, I'm not entirely sure anymore why this is in here (despite writing it myself :-)), and I don't see why this couldn't just return the full set (@OualidBenkarim?).
Ah! Yes, I'm using the MATLAB version. sqrt(360) = 18.97 - close enough to make much more sense, thank you!! I'm glad I'm not the only one who doesn't always remember why something is in my code :-)