openfold
Run OpenFold on CPU
Hello,
I have issues when running openfold on a CPU.
When I execute the run_pretrained_openfold.py script with the --model_device cpu argument set, I get the following error:
Traceback (most recent call last):
File "/scratch/SCRATCH_SAS/roman/fold_test/openfold/run_pretrained_openfold.py", line 387, in <module>
main(args)
File "/scratch/SCRATCH_SAS/roman/fold_test/openfold/run_pretrained_openfold.py", line 254, in main
out = run_model(model, processed_feature_dict, tag, args.output_dir)
File "/scratch/SCRATCH_SAS/roman/fold_test/openfold/openfold/utils/script_utils.py", line 159, in run_model
out = model(batch)
File "/home/rjo21/anaconda3/envs/fold_serv2/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/scratch/SCRATCH_SAS/roman/fold_test/openfold/openfold/model/model.py", line 512, in forward
outputs, m_1_prev, z_prev, x_prev = self.iteration(
File "/scratch/SCRATCH_SAS/roman/fold_test/openfold/openfold/model/model.py", line 366, in iteration
z = self.extra_msa_stack(
File "/home/rjo21/anaconda3/envs/fold_serv2/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/scratch/SCRATCH_SAS/roman/fold_test/openfold/openfold/model/evoformer.py", line 1007, in forward
m, z = b(m, z)
File "/home/rjo21/anaconda3/envs/fold_serv2/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/scratch/SCRATCH_SAS/roman/fold_test/openfold/openfold/model/evoformer.py", line 518, in forward
self.msa_att_row(
File "/home/rjo21/anaconda3/envs/fold_serv2/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/scratch/SCRATCH_SAS/roman/fold_test/openfold/openfold/model/msa.py", line 266, in forward
m = self._chunk(
File "/scratch/SCRATCH_SAS/roman/fold_test/openfold/openfold/model/msa.py", line 121, in _chunk
return chunk_layer(
File "/scratch/SCRATCH_SAS/roman/fold_test/openfold/openfold/utils/chunk_utils.py", line 299, in chunk_layer
output_chunk = layer(**chunks)
File "/scratch/SCRATCH_SAS/roman/fold_test/openfold/openfold/model/msa.py", line 101, in fn
return self.mha(
File "/home/rjo21/anaconda3/envs/fold_serv2/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/scratch/SCRATCH_SAS/roman/fold_test/openfold/openfold/model/primitives.py", line 492, in forward
o = attention_core(q, k, v, *((biases + [None] * 2)[:2]))
File "/scratch/SCRATCH_SAS/roman/fold_test/openfold/openfold/utils/kernel/attention_core.py", line 47, in forward
attn_core_inplace_cuda.forward_(
RuntimeError: input must be a CUDA tensor
This tells me I need to pass a CUDA tensor to the attention kernel, but since I'm running the code on the CPU, no CUDA should be involved at all?!
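For context, the failure mode in the traceback is that the attention forward dispatches to the fused `attn_core_inplace_cuda` kernel unconditionally, even for CPU tensors. A minimal sketch of a device-aware fallback, assuming a plain softmax attention is an acceptable substitute on CPU (function names here are illustrative, not OpenFold's actual API):

```python
import torch

def attention_reference(q, k, v, biases=()):
    # Plain softmax attention in pure PyTorch; runs on any device.
    a = torch.matmul(q, k.transpose(-1, -2))
    for b in biases:
        a = a + b
    a = torch.softmax(a, dim=-1)
    return torch.matmul(a, v)

def attention_dispatch(cuda_kernel, q, k, v, biases=()):
    # Route to the fused CUDA kernel only when the tensors actually
    # live on the GPU; otherwise use the pure-PyTorch path above.
    if q.is_cuda:
        return cuda_kernel(q, k, v, *((list(biases) + [None] * 2)[:2]))
    return attention_reference(q, k, v, biases)
```

On a CPU-only run, `attention_dispatch` never touches the CUDA extension, which is roughly the guarantee a proper fix would need to provide.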
This is the environment (env.txt) I'm using on a standard 64-bit Linux OS.
Thanks for any help in advance. Roman
I just merged a PR that should make this possible. LMK if this still doesn't run.
Hello,
Thank you for looking into this issue; unfortunately, it is not yet solved. I pulled the latest version of the project and ran the same prediction as in the initial comment, but the error still occurs. Here's the new traceback:
Traceback (most recent call last):
File "/scratch/SCRATCH_SAS/roman/fold_test/openfold/run_pretrained_openfold.py", line 391, in <module>
main(args)
File "/scratch/SCRATCH_SAS/roman/fold_test/openfold/run_pretrained_openfold.py", line 254, in main
out = run_model(model, processed_feature_dict, tag, args.output_dir)
File "/scratch/SCRATCH_SAS/roman/fold_test/openfold/openfold/utils/script_utils.py", line 159, in run_model
out = model(batch)
File "/home/rjo21/anaconda3/envs/fold_serv2/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/scratch/SCRATCH_SAS/roman/fold_test/openfold/openfold/model/model.py", line 512, in forward
outputs, m_1_prev, z_prev, x_prev = self.iteration(
File "/scratch/SCRATCH_SAS/roman/fold_test/openfold/openfold/model/model.py", line 366, in iteration
z = self.extra_msa_stack(
File "/home/rjo21/anaconda3/envs/fold_serv2/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/scratch/SCRATCH_SAS/roman/fold_test/openfold/openfold/model/evoformer.py", line 1007, in forward
m, z = b(m, z)
File "/home/rjo21/anaconda3/envs/fold_serv2/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/scratch/SCRATCH_SAS/roman/fold_test/openfold/openfold/model/evoformer.py", line 518, in forward
self.msa_att_row(
File "/home/rjo21/anaconda3/envs/fold_serv2/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/scratch/SCRATCH_SAS/roman/fold_test/openfold/openfold/model/msa.py", line 266, in forward
m = self._chunk(
File "/scratch/SCRATCH_SAS/roman/fold_test/openfold/openfold/model/msa.py", line 121, in _chunk
return chunk_layer(
File "/scratch/SCRATCH_SAS/roman/fold_test/openfold/openfold/utils/chunk_utils.py", line 299, in chunk_layer
output_chunk = layer(**chunks)
File "/scratch/SCRATCH_SAS/roman/fold_test/openfold/openfold/model/msa.py", line 101, in fn
return self.mha(
File "/home/rjo21/anaconda3/envs/fold_serv2/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/scratch/SCRATCH_SAS/roman/fold_test/openfold/openfold/model/primitives.py", line 492, in forward
o = attention_core(q, k, v, *((biases + [None] * 2)[:2]))
File "/scratch/SCRATCH_SAS/roman/fold_test/openfold/openfold/utils/kernel/attention_core.py", line 47, in forward
attn_core_inplace_cuda.forward_(
RuntimeError: input must be a CUDA tensor
Environment, OS, and other hardware and software parameters are the same as in the trial above.
Thanks for any help. Roman
I'm also seeing this error, but this hack still seems to work.