openfold
Trainable, memory-efficient, and GPU-friendly PyTorch reproduction of AlphaFold 2
I ran into this problem: `RuntimeError: CUDA error: CUBLAS_STATUS_INTERNAL_ERROR when calling cublasSgemmStridedBatched( handle, opa, opb, m, n, k, &alpha, a, lda, stridea, b, ldb, strideb, &beta, c, ldc, stridec, num_batches)`...
Hi, first of all, great work! Just a question: is it possible to pass a batch of protein sequences to `alignment_runner.run(fasta_path, local_alignment_dir)` in the run_pretrained_openfold.py file and save...
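For what it's worth, one simple way to approximate batching is to loop over several FASTA files with the same runner. This is only a sketch: `fasta_dir` and `output_dir` are hypothetical names, and `alignment_runner` is assumed to be the runner instance already constructed in run_pretrained_openfold.py.

```python
import os

fasta_dir = "input_fastas"   # hypothetical directory holding one FASTA file per sequence
output_dir = "alignments"    # hypothetical root for per-sequence alignment directories

for fasta_name in sorted(os.listdir(fasta_dir)):
    if not fasta_name.endswith(".fasta"):
        continue
    fasta_path = os.path.join(fasta_dir, fasta_name)
    # One alignment directory per sequence, mirroring the single-sequence call
    # alignment_runner.run(fasta_path, local_alignment_dir) from the script.
    local_alignment_dir = os.path.join(output_dir, os.path.splitext(fasta_name)[0])
    os.makedirs(local_alignment_dir, exist_ok=True)
    alignment_runner.run(fasta_path, local_alignment_dir)
```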
From debugging the code, I find there is a problem with the data entering the model: some of the values whose keys are prefixed with `template` are NaN.
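A generic way to confirm which inputs are bad is to scan the feature dictionary for non-finite values before it reaches the model. This is plain PyTorch debugging code, not OpenFold API; `feats` stands for whatever feature dict your data pipeline produces.

```python
import torch

def report_nonfinite(feats: dict) -> None:
    """Print every floating-point feature tensor that contains NaN or Inf values."""
    for name, value in feats.items():
        if torch.is_tensor(value) and value.is_floating_point():
            if not torch.isfinite(value).all():
                bad = (~torch.isfinite(value)).sum().item()
                print(f"{name}: {bad} non-finite values, shape {tuple(value.shape)}")

# Example: only check the template-prefixed features mentioned above.
# report_nonfinite({k: v for k, v in feats.items() if k.startswith("template")})
```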
Hi, I'm training OpenFold on a single node with multiple GPUs and on multiple nodes with multiple GPUs, and I noticed something interesting. In the `environment.yml` file the DeepSpeed version is `deepspeed==0.5.3`, but the zero_to_fp32.py...
Hi, I generated chain_data_cache.json, but when I run train_openfold.py it errors at line 301 of openfold/data/data_module.py, because the training dataset is much larger than chain_data_cache.json; the cache does not cover every chain in the training dataset.
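A quick way to see the extent of the mismatch is to compare the keys in chain_data_cache.json against the chain IDs the training set expects. This sketch only assumes the cache is a JSON object keyed by chain ID; `train_chain_ids` is a hypothetical list of however you enumerate your training chains.

```python
import json

with open("chain_data_cache.json") as f:
    chain_data_cache = json.load(f)

# train_chain_ids: hypothetical list of chain IDs used by the training set,
# e.g. derived from your mmCIF or alignment directory names.
missing = [c for c in train_chain_ids if c not in chain_data_cache]
print(f"{len(missing)} training chains are absent from chain_data_cache.json")
print(missing[:20])
```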
Hi, when reading the supplementary information of AlphaFold 2, I got confused by section 1.11.8, "Reducing the memory consumption". It says that when using the technique called gradient checkpointing, the...
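For reference, gradient checkpointing trades compute for memory: intermediate activations are discarded during the forward pass and recomputed during backward. A minimal, self-contained PyTorch illustration (not OpenFold code) using `torch.utils.checkpoint`:

```python
import torch
from torch.utils.checkpoint import checkpoint

class Block(torch.nn.Module):
    """Toy residual-free block standing in for an Evoformer-style layer."""
    def __init__(self, dim: int = 64):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(dim, dim), torch.nn.ReLU(), torch.nn.Linear(dim, dim)
        )

    def forward(self, x):
        return self.net(x)

blocks = torch.nn.ModuleList(Block() for _ in range(4))
x = torch.randn(8, 64, requires_grad=True)

# Activations inside each block are recomputed in backward instead of being stored,
# so peak memory scales with one block rather than the whole stack.
for block in blocks:
    x = checkpoint(block, x)

x.sum().backward()
```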
 "clusters-by-entity-40.txt" as the parameter cluster_file is required by generate_chain_data_cache.py. But I don't find this file in my path, so I want to know how the file "clusters-by-entity-40.txt" generates. Thanks...
Thank you for sharing your code! I am trying to train OpenFold, but the problem of the loss becoming NaN persists, and the whole training run hangs when it occurs. I...
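A common first step when chasing NaN losses is to enable autograd anomaly detection, which reports the operation that produced the first non-finite gradient (at a substantial speed cost), and to fail fast on a bad loss instead of letting the run hang. Generic PyTorch, not OpenFold-specific; `check_loss` is a hypothetical helper.

```python
import torch

# Slow, but pinpoints the op that created the first NaN/Inf in the backward pass.
torch.autograd.set_detect_anomaly(True)

def check_loss(loss: torch.Tensor) -> torch.Tensor:
    """Raise immediately on a non-finite loss rather than letting training stall."""
    if not torch.isfinite(loss):
        raise RuntimeError(f"Non-finite loss encountered: {loss.item()}")
    return loss
```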
While running OpenFold, I hit the following error at 86% of the AMBER relaxation stage: FileNotFoundError: [Errno 2] No such file or directory: '/opt/conda/lib/python3.7/site-packages/openfold/resources/stereo_chemical_props.txt'. I used my sequences and the default sequence...
PyTorch Lightning and DeepSpeed LR schedulers don't interact correctly at the moment. Follow [the PL issue](https://github.com/PyTorchLightning/pytorch-lightning/issues/11694) for updates. In the meantime, use `configure_optimizers` in `train_openfold` to add LR scheduling logic.
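A minimal sketch of how that LR scheduling logic might look inside `configure_optimizers`; the class name, warm-up length, and learning rate below are placeholders, not OpenFold's actual schedule.

```python
import torch
import pytorch_lightning as pl

class DemoWrapper(pl.LightningModule):  # hypothetical stand-in for the training wrapper
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(8, 8)  # stand-in for the real model

    def configure_optimizers(self):
        optimizer = torch.optim.Adam(self.parameters(), lr=1e-3)
        # Placeholder linear warm-up over 1000 steps; swap in the schedule you need.
        scheduler = torch.optim.lr_scheduler.LambdaLR(
            optimizer, lr_lambda=lambda step: min(1.0, (step + 1) / 1000)
        )
        return {
            "optimizer": optimizer,
            "lr_scheduler": {"scheduler": scheduler, "interval": "step"},
        }
```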