CPU execution is not available anymore
I installed Boltz-1 about 5 days ago, and back then I was able to run it without a GPU using the --accelerator cpu option. But now I have reinstalled it to use the new Boltz-1x, and this option gives an error because there is no GPU in my system. Can you check whether the --accelerator cpu option still works on a system without a GPU, as it did before? My input is a FASTA file with two protein sequences of around 1700 AA.
The error at the end was RuntimeError: No CUDA GPUs are available.
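For reference, the failing run is just the standard CLI invocation with the CPU flag (assuming the usual boltz predict entry point; the input filename here is a placeholder):

boltz predict proteins.fasta --accelerator cpu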
Then I ran it on a system with a GPU, but still used the --accelerator cpu option. This time it ran but gave a different error, ending with:
...
    ret = self.fn.run(
  File "/home/ec2-user/anaconda3/envs/envcBoltzx/lib/python3.10/site-packages/triton/runtime/jit.py", line 653, in run
    kernel.run(grid_0, grid_1, grid_2, stream, kernel.function, kernel.packed_metadata, launch_metadata,
  File "/home/ec2-user/anaconda3/envs/envcBoltzx/lib/python3.10/site-packages/triton/backends/nvidia/driver.py", line 444, in __call__
    self.launch(*args, **kwargs)
ValueError: Pointer argument (at 0) cannot be accessed from Triton (cpu tensor?)
Predicting DataLoader 0:   0%|          | 0/1 [06:19<?, ?it/s]
Thanks for pointing this out, the latest code should now work!
Thanks, I tested it on my macOS M2 and it is now running!
The only change required for installation is removing the "trifast>=0.1.11" dependency from the pyproject.toml file, and of course running with CPU only (--accelerator cpu).
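For clarity, the edit is just deleting one entry from the dependencies list in pyproject.toml; the surrounding lines here are illustrative placeholders, not the actual file contents:

[project]
dependencies = [
    # ...other dependencies left unchanged...
    "trifast>=0.1.11",  # delete this line for installs without CUDA
]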
Could a boltz 1.0.1 release be made on PyPI so that Boltz will run on systems without Nvidia graphics? I made a ChimeraX user interface to run Boltz on Mac, Windows, and Linux computers without requiring an Nvidia GPU. Performance on Mac ARM GPUs is especially good, but it is broken in the boltz 1.0 release. Here are Boltz benchmarks on non-Nvidia systems:
https://www.rbvi.ucsf.edu/chimerax/data/boltz-apr2025/boltz_help.html#runtimes
Also, the Boltz 1.0 install fails on Linux without Nvidia graphics. The existing fix probably remedies that, but I have not verified it.
Thanks for pointing this out, the latest code should now work!
Thanks. It now works in CPU mode on a system that has a GPU, but it still fails on a CPU-only system with no GPU available.
"/home/galtay/.conda/envs/boltz_py311/lib/python3.11/site-packages/torch/cuda/__init__.py", line 372, in _lazy_init torch._C._cuda_init() RuntimeError: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx
I also had this issue this morning. As a quick and dirty fix, I modified the script primitives.py to skip using trifast, as follows (a sketch of the resulting control flow is shown after the list):
- line 46: set trifast_is_installed to 0
- lines 516 and 517: remove both lines to suppress the call to the function _trifast_attn
- lines 658 to 695: remove or comment out the function _trifast_attn itself, as we don't need it in this scenario
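Roughly, those edits leave behind a dispatch like the sketch below. The names follow the list above, but the structure is reconstructed from it rather than copied from primitives.py, and bias/mask arguments are omitted for brevity:

import torch
import torch.nn.functional as F

# Mirrors the flag edited on line 46: detect trifast once at import time.
try:
    import trifast  # noqa: F401
    trifast_is_installed = True
except ImportError:
    trifast_is_installed = False

def triangle_attention(q, k, v):
    if trifast_is_installed and q.is_cuda:
        # This is the branch that lines 516-517 called and lines 658-695
        # defined. trifast's Triton kernels need CUDA tensors, which is why
        # the CPU run above died with "Pointer argument ... (cpu tensor?)".
        raise NotImplementedError("trifast path omitted in this sketch")
    # Portable fallback that runs on CPU and non-Nvidia GPUs.
    return F.scaled_dot_product_attention(q, k, v)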