Decoy discrimination benchmark using finetuned model

junhaobearxiong opened this issue 2 years ago • 3 comments

Hi Phil!

Thank you so much for all the work that went into creating and curating this great resource! I've been interested in leveraging TCRdock for predicting TCR binding specificity for some peptides of interest, and I've been trying to reproduce the decoy discrimination results in the paper as a first sanity check. The main difference from the procedure described in the paper is that I used the finetuned model and the updated protocol described in #7, namely:

  1. add the flag --new_docking when running setup_for_alphafold.py
  2. add the flags --model_names model_2_ptm_ft4 --model_params_files /path/to/tcrpmhc_run4_af_mhc_params_891.pkl when running run_prediction.py, where that .pkl file is the fine-tuned parameter set downloaded from the dropbox link.
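For concreteness, here is a rough sketch of what those two commands might look like. The `--new_docking`, `--model_names`, and `--model_params_files` flags are the ones from the protocol above; the input/output flags (`--targets_tsvfile`, `--output_dir`, `--targets`, `--outfile_prefix`) and all file paths are illustrative placeholders, so check each script's `--help` for the exact interface. The `echo` makes this a dry run that just prints the commands; drop it to actually execute them.

```shell
# Placeholder input files -- substitute your own.
TARGETS=my_tcr_pmhc_targets.tsv
PARAMS=/path/to/tcrpmhc_run4_af_mhc_params_891.pkl

# Step 1: set up AlphaFold inputs with the updated docking geometry.
echo python setup_for_alphafold.py \
    --targets_tsvfile "$TARGETS" \
    --output_dir af_setup \
    --new_docking

# Step 2: run prediction with the fine-tuned parameter set.
echo python run_prediction.py \
    --targets af_setup/targets.tsv \
    --outfile_prefix af_preds \
    --model_names model_2_ptm_ft4 \
    --model_params_files "$PARAMS"
```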

The reasons I used the finetuned model are that it is more efficient to run and is supposed to generate higher-quality predicted structures (which, by the results of the paper, correlate with better decoy discrimination). But I'm wondering whether you would expect decoy discrimination to generally improve with the finetuned model? I'm asking because the results of my initial analyses seem quite different from those in the paper, so I want to double-check whether this is what you would expect:

  1. For the true TCR binders, the predicted scores for the TCR-pMHC pairs (after correcting for background TCRs for each pMHC, and mean-centering for each TCR) don't correlate significantly with the wt_binding_score column in datasets_from_the_paper/table_S2_specificity_benchmark_tcrs.csv. I've limited my analyses to the 6 human peptides from the paper, and used the 50 human background TCRs in the aforementioned csv file for the pMHC-intrinsic effect correction.
  2. For each peptide, the classification performance (as measured by AUROC) improves for some peptides but decreases for others, often significantly. For this evaluation, I do want to clarify: in Figure 3E of the paper, for each peptide, is the ROC ranking the scores for the 50 positive pairs (50 TCRs x 1 true peptide) vs. 450 negative pairs (the same 50 TCRs x 9 decoys), so the positive-to-negative ratio is always 1:9? If it is the case that the evaluation set is imbalanced, is the precision-recall curve also reported somewhere?
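For reference, here is a minimal sketch of the normalization and the 1:9 ROC evaluation I describe above, on synthetic data. The variable names (e.g. `pae`) and the rank-based AUROC helper are mine, not TCRdock's actual output columns or code:

```python
import numpy as np

rng = np.random.default_rng(0)
n_tcrs, n_pmhcs = 50, 10                   # 50 TCRs x (1 true peptide + 9 decoys)
pae = rng.normal(size=(n_tcrs, n_pmhcs))   # stand-in for per-pair PAE (lower = better)

# 1. correct for pMHC-intrinsic effects: subtract each pMHC's mean score
#    over a background TCR set (approximated here by the full panel)
pae_bg = pae - pae.mean(axis=0, keepdims=True)
# 2. mean-center per TCR to remove TCR-intrinsic offsets; negate so higher = better
score = -(pae_bg - pae_bg.mean(axis=1, keepdims=True))

def auroc(pos, neg):
    """Rank-based AUROC: probability a positive outscores a random negative."""
    pos, neg = np.asarray(pos), np.asarray(neg)
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

true_col = 0                     # suppose column 0 is the true peptide
pos = score[:, true_col]         # 50 positive pairs
neg = score[:, 1:].ravel()       # 450 negative pairs, i.e. the 1:9 ratio
print(f"AUROC = {auroc(pos, neg):.3f}")
```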

Again, thank you so much for this great resource!

Best, Bear

junhaobearxiong avatar Nov 18 '23 19:11 junhaobearxiong

Hi Bear, Thanks for trying this out! These are all great questions. As you can see, TCR specificity prediction is a very hard problem. Here are a few thoughts:

  • Surprisingly I have not seen any improvement in specificity prediction for the structure-fine-tuned model. If anything, it looks worse. It may be that the structure-fine-tuning process distorts the confidence metric (PAE) which is critical for specificity prediction in our approach.
  • I am also not confident that the faster modeling approach (n=1 run vs. n=3) improves specificity prediction.
  • I expect that the results from the exact calculations in the paper will be reproducible, but please let me know if that does not seem to be true in your hands. The main focus, as the GitHub repository name suggests, was on structure prediction, but I am certainly working hard to improve the approach for specificity prediction since that would have broad applicability. But, as I said, it's a very hard problem!
  • As to the analysis, I don't think I looked at anything other than ROC for specificity.

Some of the things I am trying right now are:

  • explicitly fine-tuning for specificity prediction rather than structure prediction, using the approach from this paper: https://www.pnas.org/doi/abs/10.1073/pnas.2216697120
  • testing out the newer v2.3 AlphaFold models, in particular the multimer models.
  • gathering more data for training

I am also excited to try the new version of AlphaFold (https://storage.googleapis.com/deepmind-media/DeepMind.com/Blog/a-glimpse-of-the-next-generation-of-alphafold/alphafold_latest_oct2023.pdf) if/when it is released.

Excited to hear any thoughts you have on these issues! Take care, Phil

phbradley avatar Nov 19 '23 16:11 phbradley

Hi Phil!

Thank you so much for the detailed reply! Indeed, specificity prediction is a challenging problem, and I've also been interested in exploring potential avenues to improve it. I will try to run the original setup in the paper (i.e. 3 AlphaFold runs per target, using non-finetuned parameters, excluding nearby templates) and see if the results are consistent.

If I may ask, I'm curious about what you mean by "generating more data for training"? Would it be generating more structure data for training structure prediction, or using binding data as in the Motmaen et al. paper?

I'm excited to hear about the new things you are trying!

Best, Bear

junhaobearxiong avatar Nov 19 '23 18:11 junhaobearxiong

Hi Bear, Right, it would be generating (and finding!) more binding data for training the models. Though more structure data would certainly help, too; it's just harder to come by... Take care, Phil

phbradley avatar Nov 20 '23 04:11 phbradley