ColabFold
Update ColabFold to use AlphaFold 2.3.0 to predict larger structures
Do you plan to update ColabFold to use AlphaFold 2.3.0? It uses less memory when running multimer predictions, allowing larger structures to be predicted, which would be very valuable for researchers.
Here are the AlphaFold 2.3.0 release notes:
https://github.com/deepmind/alphafold/releases/tag/v2.3.0
I guess an update would require updating the Steinegger lab AlphaFold GitHub fork:
https://github.com/steineggerlab/alphafold
I'm posting this issue on the ColabFold GitHub because the Steinegger AlphaFold fork does not have issue tracking enabled.
Hi everyone!
I saw that @sokrypton opened a pull request for AlphaFold 2.3 (https://github.com/steineggerlab/alphafold/pull/3). Has anyone tested it?
Cheers, Thibault.
I got version 2.3.0 working in colab if you want to try: https://colab.research.google.com/github/sokrypton/ColabFold/blob/beta/AlphaFold2.ipynb
or, if you want to install locally, grab the beta branch:

```
pip install -q --no-warn-conflicts "colabfold[alphafold-minus-jax] @ git+https://github.com/sokrypton/ColabFold@beta"
```
Thank you so muuuch =D It works great in my Singularity container :-)
I just have 4 warnings (they also appear on Google Colab), but I guess they are just warnings and do not impact the quality of the models, right?
```
2023-01-18 09:14:17,494 Unable to initialize backend 'tpu_driver': NOT_FOUND: Unable to find driver in registry given worker:
2023-01-18 09:14:18,270 Unable to initialize backend 'rocm': NOT_FOUND: Could not find registered platform with name: "rocm". Available platform names are: Interpreter Host CUDA
2023-01-18 09:14:18,271 Unable to initialize backend 'tpu': module 'jaxlib.xla_extension' has no attribute 'get_tpu_client'
2023-01-18 09:14:18,272 Unable to initialize backend 'plugin': xla_extension has no attributes named get_plugin_device_client. Compile TensorFlow with //tensorflow/compiler/xla/python:enable_plugin_device set to true (defaults to false) to enable this.
```
@tubiana Not the expert, but these look pretty harmless to me.
@tubiana jax tests for available TPUs before trying to use GPUs; this is where the alert comes from. I'm looking to see if we can suppress this warning.
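If anyone wants to silence these in the meantime, here is a minimal sketch that filters the messages out, assuming they are emitted through Python's `logging` machinery; the logger name `jax._src.xla_bridge` is an assumption and may differ between jax versions:

```python
# Sketch: drop the harmless "Unable to initialize backend" messages that
# jax logs while probing for TPU/ROCm backends before falling back to GPU.
import logging


class _BackendInitFilter(logging.Filter):
    """Reject log records about backends jax probed but could not find."""

    def filter(self, record: logging.LogRecord) -> bool:
        # Returning False drops the record; everything else passes through.
        return "Unable to initialize backend" not in record.getMessage()


# Attach the filter to the logger jax uses for platform discovery.
# (Logger name is an assumption; adjust for your installed jax version.)
logging.getLogger("jax._src.xla_bridge").addFilter(_BackendInitFilter())
```

This only hides the messages; the backend probing itself still happens, so there is no effect on which device jax ends up using.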
Fantastic thank you =D
Great job getting 2.3.0 working! There is an improvement in AlphaFold 2.3.1 that fixes a memory use problem with energy minimization:
https://github.com/deepmind/alphafold/releases/tag/v2.3.1
@tomgoddard thanks, I've updated to v2.3.1
Great! I will switch ChimeraX to using the new ColabFold when it is out of beta.
I tested the beta version, and with the parameter max_msa='16:32' I get the following error:
```
/usr/local/lib/python3.8/dist-packages/colabfold/batch.py in run(queries, result_dir, num_models, num_recycles, model_order, is_complex, num_ensemble, model_type, msa_mode, use_templates, custom_template_path, use_amber, keep_existing_results, rank_by, pair_mode, data_dir, host_url, random_seed, num_seeds, stop_at_score, recompile_padding, zip_results, prediction_callback, save_single_representations, save_pair_representations, training, use_gpu_relax, stop_at_score_below, dpi, max_msa, fuse)
   1288     save_representations = save_single_representations or save_pair_representations
   1289
-> 1290     model_runner_and_params = load_models_and_params(
   1291         num_models,
   1292         use_templates,

/usr/local/lib/python3.8/dist-packages/colabfold/alphafold/models.py in load_models_and_params(num_models, use_templates, num_recycle, num_ensemble, model_order, model_suffix, data_dir, stop_at_score, rank_by, return_representations, training, max_msa, fuse)
     74         int(x) for x in max_msa.split(":")
     75     ]
---> 76     model_config.data.eval.max_msa_clusters = max_msa_clusters
     77     model_config.data.common.max_extra_msa = max_extra_msa
     78

/usr/local/lib/python3.8/dist-packages/ml_collections/config_dict/config_dict.py in __getattr__(self, attribute)
    827         return self[attribute]
    828     except KeyError as e:
--> 829         raise AttributeError(e)
    830
    831     def __setitem__(self, key, value):
```
Are you getting the same error with the main branch?
> Are you getting the same error with the main branch?
Yes, tested both.
Update: it works now; I had missed "use_cluster_profile".
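For anyone else hitting this: as the traceback shows, the max_msa argument is expected to be two colon-separated integers, which then get assigned to the model config. A minimal sketch of that parsing step (modeled on the lines in the traceback, not the exact ColabFold source; the function name here is hypothetical):

```python
# Sketch of how a "clusters:extra" max_msa string is split into the two
# config values seen in the traceback (max_msa_clusters, max_extra_msa).
def parse_max_msa(max_msa: str) -> tuple[int, int]:
    """Split e.g. '16:32' into (max_msa_clusters, max_extra_msa)."""
    max_msa_clusters, max_extra_msa = [int(x) for x in max_msa.split(":")]
    return max_msa_clusters, max_extra_msa


print(parse_max_msa("16:32"))  # → (16, 32)
```

The AttributeError itself came from the config object not having the expected field, so the value format was fine; the fix was in how the config is addressed, not in the string.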
The beta Colab running AF2.3 has been working beautifully for multimers, but tonight it suddenly and frustratingly started crashing right at the last stage, the reranking of the multimer models, with the curt error message below. Could it be related to an expected update to 2.3.1, or to leaving beta? Thanks in advance for your kind help!
```
IndexError                                Traceback (most recent call last)
IndexError: list index out of range
```
Please refresh the notebook. I made a minor change to print a few extra zeros in the filename (this helps sort files later when the number of models goes beyond 10).
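For context on why the extra zeros matter: plain string sorting puts "model_10" before "model_2", so zero-padding the index makes lexicographic order match numeric order. A small illustration (the filename pattern here is hypothetical, not ColabFold's exact naming scheme):

```python
# Without padding, string sort misorders indices >= 10.
unpadded = sorted(f"model_{i}" for i in (1, 2, 10, 11))
# → ['model_1', 'model_10', 'model_11', 'model_2']

# Zero-padding the index ('03d') keeps string order == numeric order.
padded = sorted(f"model_{i:03d}" for i in (1, 2, 10, 11))
# → ['model_001', 'model_002', 'model_010', 'model_011']

print(unpadded)
print(padded)
```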
In AlphaFold.ipynb there is a statement that says: "This Colab has a small drop in average accuracy for multimers compared to local AlphaFold installation, for full multimer accuracy it is highly recommended to run AlphaFold locally." Does the same apply here, or does this notebook match the accuracy of running AlphaFold locally? What about localcolabfold? https://github.com/YoshitakaMo/localcolabfold Thanks a lot!
@rcastellsg Not the expert, but I suspect this refers to the accuracy of the multiple sequence alignment. ColabFold circumvents this problem entirely, since it outsources MSA generation. (Also, I could be wrong :)