Tom Goddard
Another way to increase the maximum size of predicted structures with a given amount of VRAM is to use 16-bit floating point (bfloat16) on Nvidia GPUs, as described in #263....
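As a rough illustration of the memory saving (not taken from the linked issue), a bfloat16 tensor uses half the bytes of a float32 tensor of the same shape; the small PyTorch sketch below just compares element sizes.

```python
import torch

# Compare per-tensor memory for float32 vs bfloat16 (same shape).
x32 = torch.zeros(1000, 1000, dtype=torch.float32)
x16 = torch.zeros(1000, 1000, dtype=torch.bfloat16)

print(x32.element_size() * x32.nelement())  # 4,000,000 bytes
print(x16.element_size() * x16.nelement())  # 2,000,000 bytes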
Thanks for pointing out the typo. I have fixed it in this pull request. I fixed it a week ago but forgot to update this fork because I decided to...
In Boltz 2 the use of bfloat16 is set by this line in the predict() function in main.py: precision=32 if model == "boltz1" else "bf16-mixed", and the stderr output of...
Makes sense to close this. As noted in the comment above, https://github.com/jwohlwend/boltz/pull/264#issuecomment-2960942079, Boltz 2 is using bfloat16 for some operations because it sets precision = "bf16-mixed", and this seems to...
> Hmm interesting about the CPU.. Maybe we should default to 32 on CPU? I think precision should be bf16-mixed only when CUDA is used, and 32 in all other...
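A minimal sketch of what that device-dependent choice could look like, assuming a PyTorch Lightning Trainer like the one used in predict() in main.py; the model variable is a placeholder and this is not the actual Boltz code.

```python
import torch
import pytorch_lightning as pl

model = "boltz2"  # placeholder for the model name passed to predict()

# Sketch of the proposal: use bf16 mixed precision only when a CUDA GPU is
# actually available, and fall back to full 32-bit precision everywhere else.
use_cuda = torch.cuda.is_available()
precision = "bf16-mixed" if (use_cuda and model != "boltz1") else 32

trainer = pl.Trainer(accelerator="gpu" if use_cuda else "cpu", precision=precision)
```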
I experimented with using more than 8 jackhmmer threads with AlphaFold 3 to speed up MSA calculation and found it did not give any significant speedup because the speed bottleneck...
When the SMILES string contains more than one component (pieces that are not covalently connected), as indicated by a "." in the string, affinity prediction fails with the error...
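A quick way to check a ligand for this case before submitting it is to count disconnected fragments with RDKit; this check is only a sketch and is not part of Boltz.

```python
from rdkit import Chem

def is_multi_component(smiles: str) -> bool:
    """Return True if the SMILES encodes more than one disconnected component."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        raise ValueError(f"Could not parse SMILES: {smiles}")
    return len(Chem.GetMolFrags(mol)) > 1

print(is_multi_component("CCO"))        # False, a single molecule (ethanol)
print(is_multi_component("CCO.[Na+]"))  # True, two disconnected pieces
```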
I submitted a pull request to fix these options, #602.
I implemented these options because when predicting thousands of structures the PAE and PDE files can take up most of the disk space, and so it is desirable to not...
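To get a sense of how much space these confidence outputs occupy in an existing run, a small script can sum their sizes; the pae_*/pde_* filename patterns below are an assumption, not the exact names Boltz writes, so adjust them to your output layout.

```python
from pathlib import Path

# Assumed filename patterns for the per-prediction PAE/PDE arrays.
PATTERNS = ["**/pae_*.npz", "**/pde_*.npz"]

def confidence_file_bytes(out_dir: str) -> int:
    """Total size in bytes of PAE/PDE files under a predictions directory."""
    root = Path(out_dir)
    return sum(f.stat().st_size for pattern in PATTERNS for f in root.glob(pattern))

total = confidence_file_bytes("boltz_results")
print(f"PAE/PDE files use {total / 1e9:.1f} GB")
```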
The mmseqs binary I used is the release 18 precompiled distribution from the MMseqs2 GitHub, https://github.com/soedinglab/MMseqs2/releases/download/18-8cc5c/mmseqs-osx-universal.tar.gz. It may be that the crash is due to a multi-threading issue that depends...
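One way to test the multi-threading hypothesis is to rerun the same search pinned to a single thread with the standard MMseqs2 --threads option and see whether the crash still occurs; the command below is only a sketch, with placeholder database and output paths.

```python
import subprocess

# Rerun the MSA search single-threaded to test whether the crash is a
# multi-threading issue. The database paths are placeholders.
cmd = [
    "mmseqs", "search",
    "query_db", "target_db", "result_db", "tmp",
    "--threads", "1",
]
subprocess.run(cmd, check=True)
```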