Installation help: the output of scripts/install_third_party_dependencies.sh
Hello openfold team:
Thank you for your support.
I tried to install OpenFold on a RHEL 7.9 server by following the README installation section. The command $ scripts/install_third_party_dependencies.sh successfully installed many packages, and then I reached the following output, which is where the errors start:

....... done
To activate this environment, use
    $ conda activate openfold_venv
To deactivate an active environment, use
    $ conda deactivate
Attempting to install FlashAttention
fatal: destination path 'flash-attention' already exists and is not an empty directory.
HEAD is now at 5b838a8... Apply dropout scaling to dQ and dK instead of to V (in bwd)

My first question: how do I resolve this?

Further down in the output there was also:
Warning: Torch did not find available GPUs on this system.
If your intention is to cross-compile, this is not an error.
By default, We cross-compile for Volta (compute capability 7.0), Turing (compute capability 7.5),
and, if the CUDA version is >= 11.0, Ampere (compute capability 8.0).
If you wish to cross-compile for a single specific architecture, export TORCH_CUDA_ARCH_LIST="compute capability" before running setup.py.
Traceback (most recent call last):
File "setup.py", line 83, in
My second question: does exporting TORCH_CUDA_ARCH_LIST="compute capability", as the message suggests, fix the TypeError: unsupported operand type(s) for +: 'NoneType' and 'str'?
Thank you for your advice.
Hi Openfold team:
To follow up on my post above, I tried two things:
- removed the flash-attention directory: $ rm -fr flash-attention
- ran $ export TORCH_CUDA_ARCH_LIST="compute capability" (see my note just below)
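Note: I wasn't sure whether "compute capability" in that warning is meant literally or is a placeholder for an actual architecture value. If it is a placeholder, I assume the intended usage is something like

$ export TORCH_CUDA_ARCH_LIST="7.0"   # my assumption: 7.0 = Volta, per the warning (7.5 = Turing, 8.0 = Ampere)

but for this attempt I used the literal string as shown above.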
Next, with openfold_venv activated, I reran $ scripts/install_third_party_dependencies.sh
This time, after installing many Python packages, I saw:

.......
Attempting to install FlashAttention
.......
Warning: Torch did not find available GPUs on this system.
If your intention is to cross-compile, this is not an error.
By default, We cross-compile for Volta (compute capability 7.0), Turing (compute capability 7.5),
and, if the CUDA version is >= 11.0, Ampere (compute capability 8.0).
If you wish to cross-compile for a single specific architecture, export TORCH_CUDA_ARCH_LIST="compute capability" before running setup.py.
torch.version = 1.12.1
Traceback (most recent call last):
File "setup.py", line 105, in
This was followed by "Downloading AlphaFold parameters", and then the script stopped with the following two lines:

tar: .: implausibly old time stamp 1969-12-31 18:00:00
gzip: tests/test_data/sample_feats.pickle.gz: No such file or directory
I got lost!
It looks to me like the "openfold_venv" conda environment has cudatoolkit installed. My server does not have a GPU card installed. Does scripts/install_third_party_dependencies.sh have to be run on a GPU server? And how can I verify that my environment has nvcc?
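Is something like the following the right way to check? (This is just my own guess at a verification, to be run with openfold_venv activated.)

$ which nvcc
$ nvcc --version
$ python -c "import torch; print(torch.version.cuda, torch.cuda.is_available())"   # what torch itself reports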
I appreciate your help!
@scottschreckengaust @decarboxy @weitzner @timodonnell Could any of you help?
I'm no expert, but I think that, practically speaking, OpenFold requires a GPU. See also #229 ("CPU inference is completely untested").
@timodonnell Thank you for your reply. #229 mentions an answer but, unfortunately, does not provide the how-to details.
Our cluster has both CPU and GPU nodes. If I could set up the conda environment successfully on a CPU-only server, that would be helpful, because our GPU nodes do not have internet access.
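Concretely, here is the workflow I have in mind (just a sketch, assuming the conda environment lives on a shared filesystem visible to both node types, and assuming our GPU nodes are Ampere so that 8.0 is the right architecture to cross-compile for):

# on a CPU-only node with internet access
$ export TORCH_CUDA_ARCH_LIST="8.0"
$ scripts/install_third_party_dependencies.sh

# later, on a GPU node without internet access
$ conda activate openfold_venv

Does that sound workable?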