enformer-pytorch

Correlation Coefficient Calculation

Open · adamcatto opened this issue · 4 comments

I ran the test_pretrained.py script to calculate the correlation coefficient on a validation sample and got 0.5963, as expected. However, when I inspected the target and predictions, the shapes were each (896, 5313), i.e., missing the batch dimension. The pearson_corr_coef function computes similarity over dim=1, so the calculated 0.5963 is actually a measure of correlation across the different cell lines, rather than across the track positions for each cell line. When you unsqueeze the batch dimension, the correlation is calculated over track positions and yields 0.4721. This is the way Enformer reports correlation, so does it make sense to update the README and test_pretrained.py with this procedure?

Also, were the reported correlation coefficients of 0.625 and 0.65 on the train/test sets calculated on samples with the missing batch dimension? If so, a recalculation would be necessary. Am I missing something?
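
(For reference, here is a minimal sketch, not code from this repo, of the per-track convention described above: Pearson correlation over the 896 positions for each of the 5313 tracks, averaged across tracks. The helper name per_track_pearson is made up for illustration.)

import torch

def per_track_pearson(pred, target, eps = 1e-8):
    # pred, target: (positions, tracks), e.g. (896, 5313)
    pred_c = pred - pred.mean(dim = 0, keepdim = True)
    target_c = target - target.mean(dim = 0, keepdim = True)
    # correlate over positions (dim 0), giving one coefficient per track
    num = (pred_c * target_c).sum(dim = 0)
    den = pred_c.norm(dim = 0) * target_c.norm(dim = 0) + eps
    # average the per-track coefficients across the 5313 tracks
    return (num / den).mean()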

Here is the modified test_pretrained.py script I have used:

import torch
from enformer_pytorch import Enformer

enformer = Enformer.from_pretrained('EleutherAI/enformer-official-rough').cuda()
enformer.eval()

data = torch.load('./data/test-sample.pt')
seq, target = data['sequence'].cuda(), data['target'].cuda()
print(seq.shape) # torch.Size([131072, 4])
print(target.shape) # torch.Size([896, 5313])
seq = seq.unsqueeze(0)
target = target.unsqueeze(0)

# Note: without the unsqueeze, the prediction shape is also `torch.Size([896, 5313])`.

with torch.no_grad():
    corr_coef = enformer(
        seq,
        target = target,
        return_corr_coef = True,
        head = 'human'
    )

print(corr_coef) # tensor([0.4721], device='cuda:0')
assert corr_coef > 0.1

adamcatto · Jan 22, 2024

Hi Adam! Forgive my slow reply. Please dig into the notebook https://github.com/lucidrains/enformer-pytorch/blob/main/evaluate_enformer_pytorch_correlation.ipynb to see how I got to those numbers. I did not use the function you're using for computing correlation.

jstjohn · Feb 29, 2024

Now to your question: how is the correlation calculated in forward? I didn't write that part. If you look at the code in forward, one-hot sequences passed without a batch dimension get one added:

    no_batch = x.ndim == 2

    if no_batch:
        x = rearrange(x, '... -> () ...')
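
(As a quick aside, a self-contained demo of what that einops pattern does; this is not from the repo's code:)

    import torch
    from einops import rearrange

    x = torch.randn(896, 5313)
    # '... -> () ...' prepends a singleton (batch) axis, equivalent to x.unsqueeze(0)
    print(rearrange(x, '... -> () ...').shape)  # torch.Size([1, 896, 5313])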

Now the other part is more interesting. I don't see a batch dimension added to the target (I'm looking at the code on my iPhone). You'd have to look at how the correlation function is called in Enformer's forward. Maybe I missed the line where that happens, or maybe it works without doing that?

jstjohn · Feb 29, 2024

Here's the code for the correlation function:

def pearson_corr_coef(x, y, dim = 1, reduce_dims = (-1,)):
    x_centered = x - x.mean(dim = dim, keepdim = True)
    y_centered = y - y.mean(dim = dim, keepdim = True)
    return F.cosine_similarity(x_centered, y_centered, dim = dim).mean(dim = reduce_dims)

So dim=1 in this case points to the last dimension of the target (since it doesn't have the batch dimension on it), I think, but to the first dimension of the prediction?
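
(For illustration, a quick standalone check of which axis dim=1 hits for each shape, using the same definition as quoted above on synthetic tensors; only the shapes matter here:)

import torch
import torch.nn.functional as F

def pearson_corr_coef(x, y, dim = 1, reduce_dims = (-1,)):
    x_centered = x - x.mean(dim = dim, keepdim = True)
    y_centered = y - y.mean(dim = dim, keepdim = True)
    return F.cosine_similarity(x_centered, y_centered, dim = dim).mean(dim = reduce_dims)

pred = torch.randn(896, 5313)
target = pred + torch.randn(896, 5313)

# without a batch dim, dim=1 is the 5313 track/cell-line axis,
# so each of the 896 positions is correlated across tracks
print(pearson_corr_coef(pred, target).shape)              # torch.Size([]) -- a scalar

# with a batch dim added, dim=1 is the 896 position axis,
# so each track is correlated across positions
print(pearson_corr_coef(pred[None], target[None]).shape)  # torch.Size([1])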

Seems worth digging into! Keep in mind this affects the sanity-check return value, but not how the correlation was verified. Again, see the notebook I posted, which calculates this independently.

jstjohn · Feb 29, 2024