OpenFold uses DeepSpeed mainly for its activation checkpointing, bfloat16 training, and ZeRO optimizer. DAP in FastFold is for model parallelism, which is not the same functionality as what DeepSpeed provides....
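As a rough illustration of what the activation checkpointing part does (independent of DeepSpeed's wrapper), a block can be wrapped with PyTorch's native checkpoint utility so its intermediate activations are recomputed in the backward pass instead of being stored. This is a minimal sketch, not OpenFold's actual code:

```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

class ToyBlock(nn.Module):
    """Stand-in for an Evoformer-style block (hypothetical, for illustration only)."""
    def __init__(self, dim: int = 64):
        super().__init__()
        self.ff = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, x):
        return x + self.ff(x)

block = ToyBlock()
x = torch.randn(8, 64, requires_grad=True)

# With checkpointing, the activations inside `block` are not kept for backward;
# they are recomputed during the backward pass, trading compute for memory.
y = checkpoint(block, x, use_reentrant=False)
y.sum().backward()
```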
Thank you for your interest. FastFold currently provides a high-performance distributed Evoformer implementation, which can be used with [OpenFold](https://github.com/aqlaboratory/openfold) if a complete training process is required. You can refer to...
Here is our config file, for your reference. [config.py](https://github.com/hpcaitech/FastFold/files/8520137/config.py.txt)
For the first to fifth points, we use the same settings as AlphaFold/OpenFold: the template stack, extra MSA stack, structure module, and recycling are all turned on, and the number of recycling...
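The authoritative values are in the attached config.py; the snippet below is only a rough sketch of the switches mentioned above, and the key names are illustrative rather than the real OpenFold/FastFold keys:

```python
# Illustrative sketch only -- see the attached config.py for the real keys and values.
train_config = {
    "model": {
        "template": {"enabled": True},          # template stack on
        "extra_msa_stack": {"enabled": True},   # extra MSA stack on
        "structure_module": {"enabled": True},  # structure module on
    },
    # Recycling is on; AlphaFold samples the number of recycling iterations
    # per training example up to this maximum (3 in the published model).
    "max_recycling_iters": 3,
}
```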
I updated `inference.py` and `README.md` to a new version. `--model-device` is no longer needed; the script now uses the visible devices to do the inference. For usage of `inference.py`, refer to https://github.com/hpcaitech/FastFold#inference...
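As a minimal sketch of the behavior (not the exact logic in `inference.py`), a PyTorch script can simply pick up whatever GPUs are exposed through `CUDA_VISIBLE_DEVICES`:

```python
import torch

# The GPUs a process can see are controlled by CUDA_VISIBLE_DEVICES, e.g.
#   CUDA_VISIBLE_DEVICES=0,1 python inference.py ...
# so an explicit --model-device flag is no longer required.
num_gpus = torch.cuda.device_count()
devices = [torch.device(f"cuda:{i}") for i in range(num_gpus)] or [torch.device("cpu")]
print(f"visible devices: {devices}")
```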
For training, AlphaFold uses activation checkpointing, which is described in [PyTorch's checkpoint interface](https://pytorch.org/docs/stable/checkpoint.html) and [this paper](https://arxiv.org/abs/1604.06174). For inference, because the representation in AlphaFold has two sequence dimensions, while...
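The two sequence dimensions are what make chunking attractive at inference time: the computation can be evaluated slice by slice along one of those dimensions, so peak activation memory scales with the chunk size rather than the full length. A minimal sketch of the idea, not AlphaFold's or FastFold's implementation:

```python
import torch

def chunked_apply(fn, x: torch.Tensor, chunk_size: int, dim: int = 0) -> torch.Tensor:
    """Apply `fn` to slices of `x` along `dim` and concatenate the results.

    Illustrative only; real implementations chunk inside the attention modules
    so the large intermediate tensors are never materialized in full.
    """
    pieces = [fn(piece) for piece in torch.split(x, chunk_size, dim=dim)]
    return torch.cat(pieces, dim=dim)

# Example: a pair representation with two sequence (residue) dimensions [N_res, N_res, C]
pair = torch.randn(256, 256, 32)
out = chunked_apply(lambda t: torch.relu(t), pair, chunk_size=64, dim=0)
```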
I think you need a CUDA environment to install FastFold. You can try using `spack` to install CUDA on your non-root CPU node. Or you can submit the installation to the...
Hi, I guess this issue is related to the version of gcc. If your gcc version is lower than 7, please try updating gcc. You can refer...
Thank you for your suggestion. FastFold currently does not depend on the cuda_ext of colossalai, but FastFold does rely on an nvcc that is compatible with the installed torch. You can check...
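One quick way to compare the two versions (a sketch that assumes `nvcc` is on the PATH):

```python
import subprocess
import torch

# CUDA version that the installed torch build was compiled against
print("torch built with CUDA:", torch.version.cuda)

# CUDA version of the local nvcc (assumes nvcc is on PATH)
print(subprocess.run(["nvcc", "--version"], capture_output=True, text=True).stdout)
```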
Thank you for your question. 1. `dap_size` refers to how many devices are used for *Dynamic Axial Parallelism*. It can be set to the number of GPUs used in distributed...
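As a rough sketch of what the number means (the divisibility assumption below reflects typical model-parallel setups and is not taken from FastFold's code):

```python
import torch

# dap_size: number of GPUs that jointly hold one model instance under
# Dynamic Axial Parallelism. Assuming one process per GPU, the world size
# is usually expected to be a multiple of dap_size; each group of dap_size
# ranks then works on one model replica.
world_size = torch.cuda.device_count() or 1   # e.g. 4 GPUs on one node
dap_size = world_size                         # e.g. use all local GPUs for one replica
assert world_size % dap_size == 0
num_replicas = world_size // dap_size
print(f"{num_replicas} replica(s), each split across {dap_size} GPU(s)")
```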