Problem with the triton package when calling boltz predict
Hi, I've installed the latest version of Boltz from source. I'm experiencing an error related to the triton package:
Exception: Failed to import Triton-based component: triangle_multiplicative_update:
Not Supported. Please make sure to install triton==3.3.0. Other versions may not work!
Predicting DataLoader 0: 0%| | 0/1 [00:01<?, ?it/s]
But inspecting the version of the triton package I receive the following:
(boltz2-env) xxxx@xxxx-Predator-PHN16-71:~/boltz_2_affinity_example$ pip3 list | grep triton
triton 3.3.0
Any suggestions?
Also for the warning:
"You are using a CUDA device ('NVIDIA GeForce RTX 4070 Laptop GPU') that has Tensor Cores. To properly utilize them, you should set torch.set_float32_matmul_precision('medium' | 'high') which will trade-off precision for performance."
How can I apply this suggested setting when calling boltz?
Thanks.
Saverio
PS: here is the list of commands I used:
(pyenv) xxxx@xxxx-Predator-PHN16-71:~$ source boltz2-env/bin/activate
(boltz2-env) xxxx@xxxx-Predator-PHN16-71:~$ boltz --help
Usage: boltz [OPTIONS] COMMAND [ARGS]...
Boltz.
Options:
--help Show this message and exit.
Commands:
predict Run predictions with Boltz.
(boltz2-env) xxxx@xxxx-Predator-PHN16-71:~$ cd boltz_2_affinity_example/
(boltz2-env) xxxx@xxxx-Predator-PHN16-71:~/boltz_2_affinity_example$ boltz predict /home/xxxx/sources/boltz-2.1.1/examples/affinity.yaml --use_msa_server
Checking input data.
All inputs are already processed.
Processing 0 inputs with 0 threads.
0it [00:00, ?it/s]
Using bfloat16 Automatic Mixed Precision (AMP)
GPU available: True (cuda), used: True
TPU available: False, using: 0 TPU cores
HPU available: False, using: 0 HPUs
/home/xxxx/boltz2-env/lib/python3.12/site-packages/pytorch_lightning/trainer/connectors/logger_connector/logger_connector.py:76: Starting from v1.9.0, tensorboardX has been removed as a dependency of the pytorch_lightning package, due to potential conflicts with other packages in the ML ecosystem. For this reason, logger=True will use CSVLogger as the default logger, unless the tensorboard or tensorboardX packages are found. Please pip install lightning[extra] or one of them to enable TensorBoard support by default
Running structure prediction for 1 input.
/home/xxxx/boltz2-env/lib/python3.12/site-packages/pytorch_lightning/utilities/migration/utils.py:56: The loaded checkpoint was produced with Lightning v2.5.0.post0, which is newer than your current Lightning version: v2.5.0
You are using a CUDA device ('NVIDIA GeForce RTX 4070 Laptop GPU') that has Tensor Cores. To properly utilize them, you should set torch.set_float32_matmul_precision('medium' | 'high') which will trade-off precision for performance. For more details, read https://pytorch.org/docs/stable/generated/torch.set_float32_matmul_precision.html#torch.set_float32_matmul_precision
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Predicting DataLoader 0: 0%| | 0/1 [00:00<?, ?it/s]
Traceback (most recent call last):
File "/home/xxxx/boltz2-env/bin/boltz", line 8, in
(boltz2-env) xxxx@xxxx-Predator-PHN16-71:~/boltz_2_affinity_example$ pip3 list | grep triton
triton 3.3.0
The matmul precision is currently hard-coded to 'highest', but there's a PR (#413) to make it a command-line option.
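Until that option lands, one possible workaround is a small wrapper script that sets the precision before handing control to boltz's click CLI. This is a sketch under an assumption: that the console script's entry point is importable as boltz.main.cli (check the shebang file in your env's bin/boltz to confirm).

```python
# boltz_precision_wrapper.py -- hypothetical wrapper, not part of boltz itself.


def set_matmul_precision(level: str = "high") -> bool:
    """Set float32 matmul precision; return False if torch is unavailable."""
    try:
        import torch
    except ImportError:
        return False
    # Valid levels are 'highest', 'high', and 'medium'.
    torch.set_float32_matmul_precision(level)
    return True


def main() -> None:
    # Set precision first, then delegate to boltz's CLI.
    set_matmul_precision("high")
    from boltz.main import cli  # assumed entry point; adjust if it differs
    cli()

# Usage (after adding `if __name__ == "__main__": main()`), instead of `boltz`:
#   python boltz_precision_wrapper.py predict affinity.yaml --use_msa_server
```

Note that boltz itself may override this internally (the precision is hard-coded, as mentioned above), so this only silences the Lightning warning for code paths that respect the global setting.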
Hi, thanks. For the error:
Exception: Failed to import Triton-based component: triangle_multiplicative_update:
Not Supported. Please make sure to install triton==3.3.0. Other versions may not work!
What can be done? I'm using Ubuntu 24.04.
Thanks.
Saverio
I have the same issue with the newest versions. As a temporary workaround I'm now using boltz==2.0.3, which works for me.
Ok. Thanks. I hope that this problem will be fixed.
Saverio
I was also unable to solve the triton issue with other package versions. Since boltz 2.1.0, prediction fails every time; I have to use 2.0.3.
Exactly the same problem here. I hope the Boltz team will publish an updated dependency list soon.
To address this issue, please try opening /home/user/anaconda3/envs/boltz2environmen/lib/python3.12/site-packages/cuequivariance_ops/triton/cache_manager.py and changing the line to gpu_core_count = 10240 (use your GPU's CUDA core count).
Hi, I've installed boltz2 using python3 -m venv boltz2-env and then building from source. I've modified cache_manager.py in /home/xxxx/boltz2-env/lib/python3.12/site-packages/cuequivariance_ops/triton, changing
gpu_core_count = pynvml.nvmlDeviceGetNumGpuCores(handle)
to
gpu_core_count = 5888 # RTX 4070 mobile CUDA core count
but the error remains. Any suggestions?
Thanks.
Saverio
Maybe you should reinstall in a fresh conda env: conda create -n boltz2new python=3.12, then pip install boltz -U. When that's done, edit /home/user/anaconda3/envs/boltz2environmen/lib/python3.12/site-packages/cuequivariance_ops/triton/cache_manager.py and replace gpu_core_count = pynvml.nvmlDeviceGetNumGpuCores(handle) with gpu_core_count = 5888. I think that will work; if not, please paste the report. Otherwise, I'd suggest trying it on WSL or Linux and using a GPU with more than 16 GB of memory.
Thanks, GPT. Let me tell you why:
Error Explanation
The error occurs because:
Inside cuequivariance_ops_torch, the code tries to retrieve GPU information using pynvml, specifically calling a method that is not supported by your current GPU or driver:
pynvml.nvmlDeviceGetNumGpuCores(handle)
Which results in the error:
pynvml.NVMLError_NotSupported
Root Cause Analysis
The root of the issue:
The function nvmlDeviceGetNumGpuCores was only added to NVML in relatively recent drivers, and on some GPU/driver combinations it raises NVMLError_NotSupported instead of returning a count.
You can verify this with:
import pynvml
pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)
print(pynvml.nvmlDeviceGetNumGpuCores(handle))  # may raise NVMLError_NotSupported
This usually means:
- The installed pynvml (nvidia-ml-py) is too old to expose the function, or
- Your driver does not support this query for your GPU.
Solution
Modify the source code to bypass the invalid function call
Locate the error source:
/home/wyb/.pixi/envs/default/lib/python3.12/site-packages/cuequivariance_ops/triton/cache_manager.py
Find this line:
gpu_core_count = pynvml.nvmlDeviceGetNumGpuCores(handle)
And manually replace it with something like:
gpu_core_count = 128 # Or set it based on your GPU's actual CUDA core count
If you're using an RTX 4080 SUPER, you could replace it with:
gpu_core_count = 10240
Or, to make it more robust:
try:
    gpu_core_count = pynvml.nvmlDeviceGetNumGpuCores(handle)
except pynvml.NVMLError_NotSupported:
    gpu_core_count = 10240  # Fallback value for RTX 4080 SUPER
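If you'd rather not hard-code the number, the core count can be derived from the SM count, which torch exposes as torch.cuda.get_device_properties(0).multi_processor_count, multiplied by the FP32 cores per SM for your architecture. The mapping below covers common compute capabilities; treat it as an illustrative sketch and double-check the value for your exact GPU:

```python
# Illustrative helper: derive the CUDA core count from the SM count.
# FP32 cores per SM, keyed by compute capability (major, minor).
CORES_PER_SM = {
    (6, 0): 64,   # Pascal GP100
    (6, 1): 128,  # Pascal GP10x
    (7, 0): 64,   # Volta
    (7, 5): 64,   # Turing
    (8, 0): 64,   # Ampere GA100 (A100)
    (8, 6): 128,  # Ampere GA10x (RTX 30xx)
    (8, 9): 128,  # Ada Lovelace (RTX 40xx)
    (9, 0): 128,  # Hopper
}


def cuda_core_count(sm_count: int, capability: tuple[int, int]) -> int:
    """CUDA core count = number of SMs * FP32 cores per SM."""
    return sm_count * CORES_PER_SM[capability]

# Example: a desktop RTX 4070 has 46 SMs on Ada (compute capability 8.9),
# so cuda_core_count(46, (8, 9)) gives its 5888 cores.

# With torch available, the inputs come from the device properties:
#   props = torch.cuda.get_device_properties(0)
#   n = cuda_core_count(props.multi_processor_count, (props.major, props.minor))
```

This avoids guessing the marketing spec sheet number when patching cache_manager.py, at the cost of maintaining the cores-per-SM table.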
Hi, I had already applied your suggestions (before the "Santo GPT" reply above; "Santo" is Italian for "Saint"), but I receive the same error. I've set gpu_core_count = 5888.
The command and its output:
(boltz2) xxxx@xxxx-Predator-PHN16-71:~$ cd boltz_2_affinity_example/
(boltz2) xxxx@xxxx-Predator-PHN16-71:~/boltz_2_affinity_example$ boltz predict /home/xxxx/sources/boltz-2.1.1/examples/affinity.yaml --use_msa_server
Checking input data.
All inputs are already processed.
Processing 0 inputs with 0 threads.
0it [00:00, ?it/s]
Using bfloat16 Automatic Mixed Precision (AMP)
GPU available: True (cuda), used: True
TPU available: False, using: 0 TPU cores
HPU available: False, using: 0 HPUs
/home/xxxx/anaconda3/envs/boltz2/lib/python3.12/site-packages/pytorch_lightning/trainer/connectors/logger_connector/logger_connector.py:76: Starting from v1.9.0, tensorboardX has been removed as a dependency of the pytorch_lightning package, due to potential conflicts with other packages in the ML ecosystem. For this reason, logger=True will use CSVLogger as the default logger, unless the tensorboard or tensorboardX packages are found. Please pip install lightning[extra] or one of them to enable TensorBoard support by default
Running structure prediction for 1 input.
/home/xxxx/anaconda3/envs/boltz2/lib/python3.12/site-packages/pytorch_lightning/utilities/migration/utils.py:56: The loaded checkpoint was produced with Lightning v2.5.0.post0, which is newer than your current Lightning version: v2.5.0
You are using a CUDA device ('NVIDIA GeForce RTX 4070 Laptop GPU') that has Tensor Cores. To properly utilize them, you should set torch.set_float32_matmul_precision('medium' | 'high') which will trade-off precision for performance. For more details, read https://pytorch.org/docs/stable/generated/torch.set_float32_matmul_precision.html#torch.set_float32_matmul_precision
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Predicting DataLoader 0: 0%| | 0/1 [00:00<?, ?it/s]
Traceback (most recent call last):
File "/home/xxxx/anaconda3/envs/boltz2/bin/boltz", line 8, in
(boltz2) xxxx@xxxx-Predator-PHN16-71:~/boltz_2_affinity_example$ pip3 list | grep triton
triton 3.3.1
Maybe this is the problem? (triton is 3.3.1, while the error message asks for 3.3.0.)
Thanks.
Saverio
Commands used to install Boltz 2 with Anaconda:
(boltz2-env) xxxx@xxxx-Predator-PHN16-71:~$ deactivate
xxxx@xxxx-Predator-PHN16-71:~$ rm -rf boltz2-env/
xxxx@xxxx-Predator-PHN16-71:~$ which python
xxxx@xxxx-Predator-PHN16-71:~$ conda create -n boltz2 python=3.12
Retrieving notices: ...working... done
Channels:
- conda-forge
- nodefaults
Platform: linux-64
Collecting package metadata (repodata.json): done
Solving environment: done
Package Plan
environment location: /home/xxxx/anaconda3/envs/boltz2
added / updated specs:
- python=3.12
The following packages will be downloaded:
package | build
---------------------------|-----------------
ca-certificates-2025.7.9 | hbd8a1cb_0 149 KB conda-forge
icu-75.1 | he02047a_0 11.6 MB conda-forge
ld_impl_linux-64-2.44 | h1423503_1 660 KB conda-forge
libexpat-2.7.0 | h5888daf_0 73 KB conda-forge
libgcc-15.1.0 | h767d61c_3 806 KB conda-forge
libgcc-ng-15.1.0 | h69a702a_3 28 KB conda-forge
libgomp-15.1.0 | h767d61c_3 437 KB conda-forge
liblzma-5.8.1 | hb9d3cd8_2 110 KB conda-forge
libnsl-2.0.1 | hb9d3cd8_1 33 KB conda-forge
libsqlite-3.50.2 | hee844dc_2 914 KB conda-forge
libstdcxx-15.1.0 | h8f9b012_3 3.7 MB conda-forge
libstdcxx-ng-15.1.0 | h4852527_3 28 KB conda-forge
openssl-3.5.1 | h7b32b05_0 3.0 MB conda-forge
pip-25.1.1 | pyh8b19718_0 1.2 MB conda-forge
python-3.12.11 |h9e4cc4f_0_cpython 30.0 MB conda-forge
setuptools-80.9.0 | pyhff2d567_0 731 KB conda-forge
tk-8.6.13 |noxft_hd72426e_102 3.1 MB conda-forge
------------------------------------------------------------
Total: 56.5 MB
The following NEW packages will be INSTALLED:
_libgcc_mutex conda-forge/linux-64::_libgcc_mutex-0.1-conda_forge
_openmp_mutex conda-forge/linux-64::_openmp_mutex-4.5-2_gnu
bzip2 conda-forge/linux-64::bzip2-1.0.8-h4bc722e_7
ca-certificates conda-forge/noarch::ca-certificates-2025.7.9-hbd8a1cb_0
icu conda-forge/linux-64::icu-75.1-he02047a_0
ld_impl_linux-64 conda-forge/linux-64::ld_impl_linux-64-2.44-h1423503_1
libexpat conda-forge/linux-64::libexpat-2.7.0-h5888daf_0
libffi conda-forge/linux-64::libffi-3.4.6-h2dba641_1
libgcc conda-forge/linux-64::libgcc-15.1.0-h767d61c_3
libgcc-ng conda-forge/linux-64::libgcc-ng-15.1.0-h69a702a_3
libgomp conda-forge/linux-64::libgomp-15.1.0-h767d61c_3
liblzma conda-forge/linux-64::liblzma-5.8.1-hb9d3cd8_2
libnsl conda-forge/linux-64::libnsl-2.0.1-hb9d3cd8_1
libsqlite conda-forge/linux-64::libsqlite-3.50.2-hee844dc_2
libstdcxx conda-forge/linux-64::libstdcxx-15.1.0-h8f9b012_3
libstdcxx-ng conda-forge/linux-64::libstdcxx-ng-15.1.0-h4852527_3
libuuid conda-forge/linux-64::libuuid-2.38.1-h0b41bf4_0
libxcrypt conda-forge/linux-64::libxcrypt-4.4.36-hd590300_1
libzlib conda-forge/linux-64::libzlib-1.3.1-hb9d3cd8_2
ncurses conda-forge/linux-64::ncurses-6.5-h2d0b736_3
openssl conda-forge/linux-64::openssl-3.5.1-h7b32b05_0
pip conda-forge/noarch::pip-25.1.1-pyh8b19718_0
python conda-forge/linux-64::python-3.12.11-h9e4cc4f_0_cpython
readline conda-forge/linux-64::readline-8.2-h8c095d6_2
setuptools conda-forge/noarch::setuptools-80.9.0-pyhff2d567_0
tk conda-forge/linux-64::tk-8.6.13-noxft_hd72426e_102
tzdata conda-forge/noarch::tzdata-2025b-h78e105d_0
wheel conda-forge/noarch::wheel-0.45.1-pyhd8ed1ab_1
Proceed ([y]/n)? y
Downloading and Extracting Packages:
Preparing transaction: done
Verifying transaction: done
Executing transaction: done
To activate this environment, use
$ conda activate boltz2
To deactivate an active environment, use
$ conda deactivate
xxxx@xxxx-Predator-PHN16-71:~$ conda activate boltz2
(boltz2) xxxx@xxxx-Predator-PHN16-71:~$ pip install boltz -U
Collecting boltz
Using cached boltz-2.1.1-py3-none-any.whl.metadata (7.1 kB)
Collecting torch>=2.2 (from boltz)
Using cached torch-2.7.1-cp312-cp312-manylinux_2_28_x86_64.whl.metadata (29 kB)
Collecting numpy<2.0,>=1.26 (from boltz)
Using cached numpy-1.26.4-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (61 kB)
Collecting hydra-core==1.3.2 (from boltz)
Using cached hydra_core-1.3.2-py3-none-any.whl.metadata (5.5 kB)
Collecting pytorch-lightning==2.5.0 (from boltz)
Using cached pytorch_lightning-2.5.0-py3-none-any.whl.metadata (21 kB)
Collecting rdkit>=2024.3.2 (from boltz)
Using cached rdkit-2025.3.3-cp312-cp312-manylinux_2_28_x86_64.whl.metadata (4.0 kB)
Collecting dm-tree==0.1.8 (from boltz)
Using cached dm_tree-0.1.8-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (1.9 kB)
Collecting requests==2.32.3 (from boltz)
Using cached requests-2.32.3-py3-none-any.whl.metadata (4.6 kB)
Collecting pandas>=2.2.2 (from boltz)
Using cached pandas-2.3.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (91 kB)
Collecting types-requests (from boltz)
Using cached types_requests-2.32.4.20250611-py3-none-any.whl.metadata (2.1 kB)
Collecting einops==0.8.0 (from boltz)
Using cached einops-0.8.0-py3-none-any.whl.metadata (12 kB)
Collecting einx==0.3.0 (from boltz)
Using cached einx-0.3.0-py3-none-any.whl.metadata (6.9 kB)
Collecting fairscale==0.4.13 (from boltz)
Using cached fairscale-0.4.13-py3-none-any.whl
Collecting mashumaro==3.14 (from boltz)
Using cached mashumaro-3.14-py3-none-any.whl.metadata (114 kB)
Collecting modelcif==1.2 (from boltz)
Using cached modelcif-1.2-py3-none-any.whl
Collecting wandb==0.18.7 (from boltz)
Using cached wandb-0.18.7-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (9.7 kB)
Collecting click==8.1.7 (from boltz)
Using cached click-8.1.7-py3-none-any.whl.metadata (3.0 kB)
Collecting pyyaml==6.0.2 (from boltz)
Using cached PyYAML-6.0.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (2.1 kB)
Collecting biopython==1.84 (from boltz)
Using cached biopython-1.84-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (12 kB)
Collecting scipy==1.13.1 (from boltz)
Using cached scipy-1.13.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (60 kB)
Collecting numba==0.61.0 (from boltz)
Using cached numba-0.61.0-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.whl.metadata (2.8 kB)
Collecting gemmi==0.6.5 (from boltz)
Using cached gemmi-0.6.5-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (2.3 kB)
Collecting scikit-learn==1.6.1 (from boltz)
Using cached scikit_learn-1.6.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (18 kB)
Collecting chembl_structure_pipeline==1.2.2 (from boltz)
Using cached chembl_structure_pipeline-1.2.2-py3-none-any.whl.metadata (3.9 kB)
Collecting cuequivariance_ops_cu12>=0.5.0 (from boltz)
Using cached cuequivariance_ops_cu12-0.5.1-py3-none-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl.metadata (20 kB)
Collecting cuequivariance_ops_torch_cu12>=0.5.0 (from boltz)
Using cached cuequivariance_ops_torch_cu12-0.5.1-cp312-cp312-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl.metadata (92 kB)
Collecting cuequivariance_torch>=0.5.0 (from boltz)
Using cached cuequivariance_torch-0.5.1-py3-none-any.whl.metadata (15 kB)
Requirement already satisfied: setuptools>=46.4.0 in ./anaconda3/envs/boltz2/lib/python3.12/site-packages (from chembl_structure_pipeline==1.2.2->boltz) (80.9.0)
Collecting sympy (from einx==0.3.0->boltz)
Using cached sympy-1.14.0-py3-none-any.whl.metadata (12 kB)
Collecting frozendict (from einx==0.3.0->boltz)
Using cached frozendict-2.4.6-py312-none-any.whl.metadata (23 kB)
Collecting omegaconf<2.4,>=2.2 (from hydra-core==1.3.2->boltz)
Using cached omegaconf-2.3.0-py3-none-any.whl.metadata (3.9 kB)
Collecting antlr4-python3-runtime==4.9.* (from hydra-core==1.3.2->boltz)
Using cached antlr4_python3_runtime-4.9.3-py3-none-any.whl
Collecting packaging (from hydra-core==1.3.2->boltz)
Using cached packaging-25.0-py3-none-any.whl.metadata (3.3 kB)
Collecting typing-extensions>=4.1.0 (from mashumaro==3.14->boltz)
Downloading typing_extensions-4.14.1-py3-none-any.whl.metadata (3.0 kB)
Collecting ihm>=1.7 (from modelcif==1.2->boltz)
Downloading ihm-2.7.tar.gz (392 kB)
Preparing metadata (setup.py) ... done
Collecting llvmlite<0.45,>=0.44.0dev0 (from numba==0.61.0->boltz)
Using cached llvmlite-0.44.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (5.0 kB)
Collecting tqdm>=4.57.0 (from pytorch-lightning==2.5.0->boltz)
Using cached tqdm-4.67.1-py3-none-any.whl.metadata (57 kB)
Collecting fsspec>=2022.5.0 (from fsspec[http]>=2022.5.0->pytorch-lightning==2.5.0->boltz)
Using cached fsspec-2025.5.1-py3-none-any.whl.metadata (11 kB)
Collecting torchmetrics>=0.7.0 (from pytorch-lightning==2.5.0->boltz)
Downloading torchmetrics-1.7.4-py3-none-any.whl.metadata (21 kB)
Collecting lightning-utilities>=0.10.0 (from pytorch-lightning==2.5.0->boltz)
Using cached lightning_utilities-0.14.3-py3-none-any.whl.metadata (5.6 kB)
Collecting charset-normalizer<4,>=2 (from requests==2.32.3->boltz)
Using cached charset_normalizer-3.4.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (35 kB)
Collecting idna<4,>=2.5 (from requests==2.32.3->boltz)
Using cached idna-3.10-py3-none-any.whl.metadata (10 kB)
Collecting urllib3<3,>=1.21.1 (from requests==2.32.3->boltz)
Using cached urllib3-2.5.0-py3-none-any.whl.metadata (6.5 kB)
Collecting certifi>=2017.4.17 (from requests==2.32.3->boltz)
Downloading certifi-2025.7.9-py3-none-any.whl.metadata (2.4 kB)
Collecting joblib>=1.2.0 (from scikit-learn==1.6.1->boltz)
Using cached joblib-1.5.1-py3-none-any.whl.metadata (5.6 kB)
Collecting threadpoolctl>=3.1.0 (from scikit-learn==1.6.1->boltz)
Using cached threadpoolctl-3.6.0-py3-none-any.whl.metadata (13 kB)
Collecting docker-pycreds>=0.4.0 (from wandb==0.18.7->boltz)
Using cached docker_pycreds-0.4.0-py2.py3-none-any.whl.metadata (1.8 kB)
Collecting gitpython!=3.1.29,>=1.0.0 (from wandb==0.18.7->boltz)
Using cached GitPython-3.1.44-py3-none-any.whl.metadata (13 kB)
Collecting platformdirs (from wandb==0.18.7->boltz)
Using cached platformdirs-4.3.8-py3-none-any.whl.metadata (12 kB)
Collecting protobuf!=4.21.0,!=5.28.0,<6,>=3.19.0 (from wandb==0.18.7->boltz)
Using cached protobuf-5.29.5-cp38-abi3-manylinux2014_x86_64.whl.metadata (592 bytes)
Collecting psutil>=5.0.0 (from wandb==0.18.7->boltz)
Using cached psutil-7.0.0-cp36-abi3-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (22 kB)
Collecting sentry-sdk>=2.0.0 (from wandb==0.18.7->boltz)
Using cached sentry_sdk-2.32.0-py2.py3-none-any.whl.metadata (10 kB)
Collecting setproctitle (from wandb==0.18.7->boltz)
Using cached setproctitle-1.3.6-cp312-cp312-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (10 kB)
Collecting nvidia-cublas-cu12>=12.5.0 (from cuequivariance_ops_cu12>=0.5.0->boltz)
Using cached nvidia_cublas_cu12-12.9.1.4-py3-none-manylinux_2_27_x86_64.whl.metadata (1.7 kB)
Collecting pynvml (from cuequivariance_ops_cu12>=0.5.0->boltz)
Using cached pynvml-12.0.0-py3-none-any.whl.metadata (5.4 kB)
Collecting cuequivariance (from cuequivariance_torch>=0.5.0->boltz)
Using cached cuequivariance-0.5.1-py3-none-any.whl.metadata (15 kB)
Collecting six>=1.4.0 (from docker-pycreds>=0.4.0->wandb==0.18.7->boltz)
Using cached six-1.17.0-py2.py3-none-any.whl.metadata (1.7 kB)
Collecting aiohttp!=4.0.0a0,!=4.0.0a1 (from fsspec[http]>=2022.5.0->pytorch-lightning==2.5.0->boltz)
Downloading aiohttp-3.12.14-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (7.6 kB)
Collecting aiohappyeyeballs>=2.5.0 (from aiohttp!=4.0.0a0,!=4.0.0a1->fsspec[http]>=2022.5.0->pytorch-lightning==2.5.0->boltz)
Using cached aiohappyeyeballs-2.6.1-py3-none-any.whl.metadata (5.9 kB)
Collecting aiosignal>=1.4.0 (from aiohttp!=4.0.0a0,!=4.0.0a1->fsspec[http]>=2022.5.0->pytorch-lightning==2.5.0->boltz)
Downloading aiosignal-1.4.0-py3-none-any.whl.metadata (3.7 kB)
Collecting attrs>=17.3.0 (from aiohttp!=4.0.0a0,!=4.0.0a1->fsspec[http]>=2022.5.0->pytorch-lightning==2.5.0->boltz)
Using cached attrs-25.3.0-py3-none-any.whl.metadata (10 kB)
Collecting frozenlist>=1.1.1 (from aiohttp!=4.0.0a0,!=4.0.0a1->fsspec[http]>=2022.5.0->pytorch-lightning==2.5.0->boltz)
Using cached frozenlist-1.7.0-cp312-cp312-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (18 kB)
Collecting multidict<7.0,>=4.5 (from aiohttp!=4.0.0a0,!=4.0.0a1->fsspec[http]>=2022.5.0->pytorch-lightning==2.5.0->boltz)
Using cached multidict-6.6.3-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl.metadata (5.3 kB)
Collecting propcache>=0.2.0 (from aiohttp!=4.0.0a0,!=4.0.0a1->fsspec[http]>=2022.5.0->pytorch-lightning==2.5.0->boltz)
Using cached propcache-0.3.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (12 kB)
Collecting yarl<2.0,>=1.17.0 (from aiohttp!=4.0.0a0,!=4.0.0a1->fsspec[http]>=2022.5.0->pytorch-lightning==2.5.0->boltz)
Using cached yarl-1.20.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (73 kB)
Collecting gitdb<5,>=4.0.1 (from gitpython!=3.1.29,>=1.0.0->wandb==0.18.7->boltz)
Using cached gitdb-4.0.12-py3-none-any.whl.metadata (1.2 kB)
Collecting smmap<6,>=3.0.1 (from gitdb<5,>=4.0.1->gitpython!=3.1.29,>=1.0.0->wandb==0.18.7->boltz)
Using cached smmap-5.0.2-py3-none-any.whl.metadata (4.3 kB)
Collecting msgpack (from ihm>=1.7->modelcif==1.2->boltz)
Using cached msgpack-1.1.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (8.4 kB)
Collecting python-dateutil>=2.8.2 (from pandas>=2.2.2->boltz)
Using cached python_dateutil-2.9.0.post0-py2.py3-none-any.whl.metadata (8.4 kB)
Collecting pytz>=2020.1 (from pandas>=2.2.2->boltz)
Using cached pytz-2025.2-py2.py3-none-any.whl.metadata (22 kB)
Collecting tzdata>=2022.7 (from pandas>=2.2.2->boltz)
Using cached tzdata-2025.2-py2.py3-none-any.whl.metadata (1.4 kB)
Collecting Pillow (from rdkit>=2024.3.2->boltz)
Using cached pillow-11.3.0-cp312-cp312-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl.metadata (9.0 kB)
Collecting filelock (from torch>=2.2->boltz)
Using cached filelock-3.18.0-py3-none-any.whl.metadata (2.9 kB)
Collecting networkx (from torch>=2.2->boltz)
Using cached networkx-3.5-py3-none-any.whl.metadata (6.3 kB)
Collecting jinja2 (from torch>=2.2->boltz)
Using cached jinja2-3.1.6-py3-none-any.whl.metadata (2.9 kB)
Collecting nvidia-cuda-nvrtc-cu12==12.6.77 (from torch>=2.2->boltz)
Using cached nvidia_cuda_nvrtc_cu12-12.6.77-py3-none-manylinux2014_x86_64.whl.metadata (1.5 kB)
Collecting nvidia-cuda-runtime-cu12==12.6.77 (from torch>=2.2->boltz)
Using cached nvidia_cuda_runtime_cu12-12.6.77-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl.metadata (1.5 kB)
Collecting nvidia-cuda-cupti-cu12==12.6.80 (from torch>=2.2->boltz)
Using cached nvidia_cuda_cupti_cu12-12.6.80-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl.metadata (1.6 kB)
Collecting nvidia-cudnn-cu12==9.5.1.17 (from torch>=2.2->boltz)
Using cached nvidia_cudnn_cu12-9.5.1.17-py3-none-manylinux_2_28_x86_64.whl.metadata (1.6 kB)
Collecting nvidia-cublas-cu12>=12.5.0 (from cuequivariance_ops_cu12>=0.5.0->boltz)
Using cached nvidia_cublas_cu12-12.6.4.1-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl.metadata (1.5 kB)
Collecting nvidia-cufft-cu12==11.3.0.4 (from torch>=2.2->boltz)
Using cached nvidia_cufft_cu12-11.3.0.4-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl.metadata (1.5 kB)
Collecting nvidia-curand-cu12==10.3.7.77 (from torch>=2.2->boltz)
Using cached nvidia_curand_cu12-10.3.7.77-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl.metadata (1.5 kB)
Collecting nvidia-cusolver-cu12==11.7.1.2 (from torch>=2.2->boltz)
Using cached nvidia_cusolver_cu12-11.7.1.2-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl.metadata (1.6 kB)
Collecting nvidia-cusparse-cu12==12.5.4.2 (from torch>=2.2->boltz)
Using cached nvidia_cusparse_cu12-12.5.4.2-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl.metadata (1.6 kB)
Collecting nvidia-cusparselt-cu12==0.6.3 (from torch>=2.2->boltz)
Using cached nvidia_cusparselt_cu12-0.6.3-py3-none-manylinux2014_x86_64.whl.metadata (6.8 kB)
Collecting nvidia-nccl-cu12==2.26.2 (from torch>=2.2->boltz)
Using cached nvidia_nccl_cu12-2.26.2-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl.metadata (2.0 kB)
Collecting nvidia-nvtx-cu12==12.6.77 (from torch>=2.2->boltz)
Using cached nvidia_nvtx_cu12-12.6.77-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl.metadata (1.6 kB)
Collecting nvidia-nvjitlink-cu12==12.6.85 (from torch>=2.2->boltz)
Using cached nvidia_nvjitlink_cu12-12.6.85-py3-none-manylinux2010_x86_64.manylinux_2_12_x86_64.whl.metadata (1.5 kB)
Collecting nvidia-cufile-cu12==1.11.1.6 (from torch>=2.2->boltz)
Using cached nvidia_cufile_cu12-1.11.1.6-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl.metadata (1.5 kB)
Collecting triton==3.3.1 (from torch>=2.2->boltz)
Using cached triton-3.3.1-cp312-cp312-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl.metadata (1.5 kB)
Collecting mpmath<1.4,>=1.1.0 (from sympy->einx==0.3.0->boltz)
Using cached mpmath-1.3.0-py3-none-any.whl.metadata (8.6 kB)
Collecting opt-einsum (from cuequivariance->cuequivariance_torch>=0.5.0->boltz)
Using cached opt_einsum-3.4.0-py3-none-any.whl.metadata (6.3 kB)
Collecting MarkupSafe>=2.0 (from jinja2->torch>=2.2->boltz)
Using cached MarkupSafe-3.0.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (4.0 kB)
Collecting nvidia-ml-py<13.0.0a0,>=12.0.0 (from pynvml->cuequivariance_ops_cu12>=0.5.0->boltz)
Using cached nvidia_ml_py-12.575.51-py3-none-any.whl.metadata (9.3 kB)
Using cached boltz-2.1.1-py3-none-any.whl (262 kB)
Using cached biopython-1.84-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (3.2 MB)
Using cached chembl_structure_pipeline-1.2.2-py3-none-any.whl (17 kB)
Using cached click-8.1.7-py3-none-any.whl (97 kB)
Using cached dm_tree-0.1.8-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (152 kB)
Using cached einops-0.8.0-py3-none-any.whl (43 kB)
Using cached einx-0.3.0-py3-none-any.whl (102 kB)
Using cached gemmi-0.6.5-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (3.2 MB)
Using cached hydra_core-1.3.2-py3-none-any.whl (154 kB)
Using cached mashumaro-3.14-py3-none-any.whl (92 kB)
Using cached numba-0.61.0-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.whl (3.9 MB)
Using cached pytorch_lightning-2.5.0-py3-none-any.whl (819 kB)
Using cached PyYAML-6.0.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (767 kB)
Using cached requests-2.32.3-py3-none-any.whl (64 kB)
Using cached scikit_learn-1.6.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (13.1 MB)
Using cached scipy-1.13.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (38.2 MB)
Using cached wandb-0.18.7-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (16.1 MB)
Using cached charset_normalizer-3.4.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (148 kB)
Using cached idna-3.10-py3-none-any.whl (70 kB)
Using cached llvmlite-0.44.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (42.4 MB)
Using cached numpy-1.26.4-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (18.0 MB)
Using cached omegaconf-2.3.0-py3-none-any.whl (79 kB)
Using cached protobuf-5.29.5-cp38-abi3-manylinux2014_x86_64.whl (319 kB)
Using cached urllib3-2.5.0-py3-none-any.whl (129 kB)
Downloading certifi-2025.7.9-py3-none-any.whl (159 kB)
Using cached cuequivariance_ops_cu12-0.5.1-py3-none-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl (31.1 MB)
Using cached cuequivariance_ops_torch_cu12-0.5.1-cp312-cp312-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl (196 kB)
Using cached cuequivariance_torch-0.5.1-py3-none-any.whl (56 kB)
Using cached docker_pycreds-0.4.0-py2.py3-none-any.whl (9.0 kB)
Using cached fsspec-2025.5.1-py3-none-any.whl (199 kB)
Downloading aiohttp-3.12.14-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (1.7 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.7/1.7 MB 6.9 MB/s eta 0:00:00
Using cached multidict-6.6.3-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl (256 kB)
Using cached yarl-1.20.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (355 kB)
Using cached aiohappyeyeballs-2.6.1-py3-none-any.whl (15 kB)
Downloading aiosignal-1.4.0-py3-none-any.whl (7.5 kB)
Using cached attrs-25.3.0-py3-none-any.whl (63 kB)
Using cached frozenlist-1.7.0-cp312-cp312-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (241 kB)
Using cached GitPython-3.1.44-py3-none-any.whl (207 kB)
Using cached gitdb-4.0.12-py3-none-any.whl (62 kB)
Using cached smmap-5.0.2-py3-none-any.whl (24 kB)
Using cached joblib-1.5.1-py3-none-any.whl (307 kB)
Using cached lightning_utilities-0.14.3-py3-none-any.whl (28 kB)
Using cached packaging-25.0-py3-none-any.whl (66 kB)
Using cached pandas-2.3.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (12.0 MB)
Using cached propcache-0.3.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (224 kB)
Using cached psutil-7.0.0-cp36-abi3-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (277 kB)
Using cached python_dateutil-2.9.0.post0-py2.py3-none-any.whl (229 kB)
Using cached pytz-2025.2-py2.py3-none-any.whl (509 kB)
Using cached rdkit-2025.3.3-cp312-cp312-manylinux_2_28_x86_64.whl (34.8 MB)
Using cached sentry_sdk-2.32.0-py2.py3-none-any.whl (356 kB)
Using cached six-1.17.0-py2.py3-none-any.whl (11 kB)
Using cached threadpoolctl-3.6.0-py3-none-any.whl (18 kB)
Using cached torch-2.7.1-cp312-cp312-manylinux_2_28_x86_64.whl (821.0 MB)
Using cached nvidia_cublas_cu12-12.6.4.1-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl (393.1 MB)
Using cached nvidia_cuda_cupti_cu12-12.6.80-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl (8.9 MB)
Using cached nvidia_cuda_nvrtc_cu12-12.6.77-py3-none-manylinux2014_x86_64.whl (23.7 MB)
Using cached nvidia_cuda_runtime_cu12-12.6.77-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl (897 kB)
Using cached nvidia_cudnn_cu12-9.5.1.17-py3-none-manylinux_2_28_x86_64.whl (571.0 MB)
Using cached nvidia_cufft_cu12-11.3.0.4-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl (200.2 MB)
Using cached nvidia_cufile_cu12-1.11.1.6-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl (1.1 MB)
Using cached nvidia_curand_cu12-10.3.7.77-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl (56.3 MB)
Using cached nvidia_cusolver_cu12-11.7.1.2-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl (158.2 MB)
Using cached nvidia_cusparse_cu12-12.5.4.2-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl (216.6 MB)
Using cached nvidia_cusparselt_cu12-0.6.3-py3-none-manylinux2014_x86_64.whl (156.8 MB)
Using cached nvidia_nccl_cu12-2.26.2-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl (201.3 MB)
Using cached nvidia_nvjitlink_cu12-12.6.85-py3-none-manylinux2010_x86_64.manylinux_2_12_x86_64.whl (19.7 MB)
Using cached nvidia_nvtx_cu12-12.6.77-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl (89 kB)
Using cached triton-3.3.1-cp312-cp312-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl (155.7 MB)
Using cached sympy-1.14.0-py3-none-any.whl (6.3 MB)
Using cached mpmath-1.3.0-py3-none-any.whl (536 kB)
Downloading torchmetrics-1.7.4-py3-none-any.whl (963 kB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 963.5/963.5 kB 7.6 MB/s eta 0:00:00
Using cached tqdm-4.67.1-py3-none-any.whl (78 kB)
Downloading typing_extensions-4.14.1-py3-none-any.whl (43 kB)
Using cached tzdata-2025.2-py2.py3-none-any.whl (347 kB)
Using cached cuequivariance-0.5.1-py3-none-any.whl (126 kB)
Using cached filelock-3.18.0-py3-none-any.whl (16 kB)
Using cached frozendict-2.4.6-py312-none-any.whl (16 kB)
Using cached jinja2-3.1.6-py3-none-any.whl (134 kB)
Using cached MarkupSafe-3.0.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (23 kB)
Using cached msgpack-1.1.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (426 kB)
Using cached networkx-3.5-py3-none-any.whl (2.0 MB)
Using cached opt_einsum-3.4.0-py3-none-any.whl (71 kB)
Using cached pillow-11.3.0-cp312-cp312-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl (6.6 MB)
Using cached platformdirs-4.3.8-py3-none-any.whl (18 kB)
Using cached pynvml-12.0.0-py3-none-any.whl (26 kB)
Using cached nvidia_ml_py-12.575.51-py3-none-any.whl (47 kB)
Using cached setproctitle-1.3.6-cp312-cp312-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (31 kB)
Using cached types_requests-2.32.4.20250611-py3-none-any.whl (20 kB)
Building wheels for collected packages: ihm
DEPRECATION: Building 'ihm' using the legacy setup.py bdist_wheel mechanism, which will be removed in a future version. pip 25.3 will enforce this behaviour change. A possible replacement is to use the standardized build interface by setting the --use-pep517 option, (possibly combined with --no-build-isolation), or adding a pyproject.toml file to the source tree of 'ihm'. Discussion can be found at https://github.com/pypa/pip/issues/6334
Building wheel for ihm (setup.py) ... done
Created wheel for ihm: filename=ihm-2.7-cp312-cp312-linux_x86_64.whl size=235613 sha256=276eef366b760387e335bbf33221e2931f286e1e7176e88c1353d15d762631e0
Stored in directory: /home/xxxx/.cache/pip/wheels/b0/21/5f/1358a14f7c48c79a4060ecce6eb1d2bb78dc72f5e5050edd28
Successfully built ihm
Installing collected packages: pytz, nvidia-ml-py, nvidia-cusparselt-cu12, mpmath, dm-tree, antlr4-python3-runtime, urllib3, tzdata, typing-extensions, triton, tqdm, threadpoolctl, sympy, smmap, six, setproctitle, pyyaml, pynvml, psutil, protobuf, propcache, platformdirs, Pillow, packaging, opt-einsum, nvidia-nvtx-cu12, nvidia-nvjitlink-cu12, nvidia-nccl-cu12, nvidia-curand-cu12, nvidia-cufile-cu12, nvidia-cuda-runtime-cu12, nvidia-cuda-nvrtc-cu12, nvidia-cuda-cupti-cu12, nvidia-cublas-cu12, numpy, networkx, multidict, msgpack, MarkupSafe, llvmlite, joblib, idna, gemmi, fsspec, frozenlist, frozendict, filelock, einops, click, charset-normalizer, certifi, attrs, aiohappyeyeballs, yarl, types-requests, sentry-sdk, scipy, requests, rdkit, python-dateutil, omegaconf, nvidia-cusparse-cu12, nvidia-cufft-cu12, nvidia-cudnn-cu12, numba, mashumaro, lightning-utilities, jinja2, ihm, gitdb, einx, docker-pycreds, cuequivariance_ops_cu12, biopython, aiosignal, scikit-learn, pandas, nvidia-cusolver-cu12, modelcif, hydra-core, gitpython, cuequivariance_ops_torch_cu12, cuequivariance, chembl_structure_pipeline, aiohttp, wandb, torch, cuequivariance_torch, torchmetrics, fairscale, pytorch-lightning, boltz
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
packmol-memgen 2024.3.27 requires matplotlib, which is not installed.
edgembar 3.0 requires matplotlib, which is not installed.
ndfes 3.0 requires matplotlib, which is not installed.
Successfully installed MarkupSafe-3.0.2 Pillow-11.3.0 aiohappyeyeballs-2.6.1 aiohttp-3.12.14 aiosignal-1.4.0 antlr4-python3-runtime-4.9.3 attrs-25.3.0 biopython-1.84 boltz-2.1.1 certifi-2025.7.9 charset-normalizer-3.4.2 chembl_structure_pipeline-1.2.2 click-8.1.7 cuequivariance-0.5.1 cuequivariance_ops_cu12-0.5.1 cuequivariance_ops_torch_cu12-0.5.1 cuequivariance_torch-0.5.1 dm-tree-0.1.8 docker-pycreds-0.4.0 einops-0.8.0 einx-0.3.0 fairscale-0.4.13 filelock-3.18.0 frozendict-2.4.6 frozenlist-1.7.0 fsspec-2025.5.1 gemmi-0.6.5 gitdb-4.0.12 gitpython-3.1.44 hydra-core-1.3.2 idna-3.10 ihm-2.7 jinja2-3.1.6 joblib-1.5.1 lightning-utilities-0.14.3 llvmlite-0.44.0 mashumaro-3.14 modelcif-1.2 mpmath-1.3.0 msgpack-1.1.1 multidict-6.6.3 networkx-3.5 numba-0.61.0 numpy-1.26.4 nvidia-cublas-cu12-12.6.4.1 nvidia-cuda-cupti-cu12-12.6.80 nvidia-cuda-nvrtc-cu12-12.6.77 nvidia-cuda-runtime-cu12-12.6.77 nvidia-cudnn-cu12-9.5.1.17 nvidia-cufft-cu12-11.3.0.4 nvidia-cufile-cu12-1.11.1.6 nvidia-curand-cu12-10.3.7.77 nvidia-cusolver-cu12-11.7.1.2 nvidia-cusparse-cu12-12.5.4.2 nvidia-cusparselt-cu12-0.6.3 nvidia-ml-py-12.575.51 nvidia-nccl-cu12-2.26.2 nvidia-nvjitlink-cu12-12.6.85 nvidia-nvtx-cu12-12.6.77 omegaconf-2.3.0 opt-einsum-3.4.0 packaging-25.0 pandas-2.3.1 platformdirs-4.3.8 propcache-0.3.2 protobuf-5.29.5 psutil-7.0.0 pynvml-12.0.0 python-dateutil-2.9.0.post0 pytorch-lightning-2.5.0 pytz-2025.2 pyyaml-6.0.2 rdkit-2025.3.3 requests-2.32.3 scikit-learn-1.6.1 scipy-1.13.1 sentry-sdk-2.32.0 setproctitle-1.3.6 six-1.17.0 smmap-5.0.2 sympy-1.14.0 threadpoolctl-3.6.0 torch-2.7.1 torchmetrics-1.7.4 tqdm-4.67.1 triton-3.3.1 types-requests-2.32.4.20250611 typing-extensions-4.14.1 tzdata-2025.2 urllib3-2.5.0 wandb-0.18.7 yarl-1.20.1
(boltz2) xxxx@xxxx-Predator-PHN16-71:~$ boltz
Usage: boltz [OPTIONS] COMMAND [ARGS]...
Boltz.
Options:
  --help  Show this message and exit.

Commands:
  predict  Run predictions with Boltz.
Hi, with

(boltz2) xxxx@xxxx-Predator-PHN16-71:~/boltz_2_affinity_example$ pip3 install triton==3.3.0
Collecting triton==3.3.0
  Using cached triton-3.3.0-cp312-cp312-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl.metadata (1.5 kB)
Requirement already satisfied: setuptools>=40.8.0 in /home/xxxx/anaconda3/envs/boltz2/lib/python3.12/site-packages (from triton==3.3.0) (80.9.0)
Using cached triton-3.3.0-cp312-cp312-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl (156.5 MB)
Installing collected packages: triton
  Attempting uninstall: triton
    Found existing installation: triton 3.3.1
    Uninstalling triton-3.3.1:
      Successfully uninstalled triton-3.3.1
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
torch 2.7.1 requires triton==3.3.1; platform_system == "Linux" and platform_machine == "x86_64", but you have triton 3.3.0 which is incompatible.
Successfully installed triton-3.3.0
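The pip error above can be confirmed without reading the logs: the installed torch wheel itself declares which triton version it pins. A minimal sketch using only the standard library (the helper name `pinned_requirements` is my own, not a Boltz or pip API):

```python
# Sketch: inspect which triton version the installed torch wheel pins,
# using only importlib.metadata from the standard library. This is why
# pip complains above: torch 2.7.1 declares triton==3.3.1 on
# Linux/x86_64, so downgrading to triton 3.3.0 breaks that pin.
from importlib import metadata


def pinned_requirements(package: str, dependency: str) -> list[str]:
    """Return the requirement strings of `package` that mention `dependency`."""
    try:
        requires = metadata.requires(package) or []
    except metadata.PackageNotFoundError:
        return []  # package not installed in this environment
    return [r for r in requires if r.lower().startswith(dependency.lower())]


if __name__ == "__main__":
    # In the environment above this should print something like:
    # ['triton==3.3.1; platform_system == "Linux" and platform_machine == "x86_64"']
    print(pinned_requirements("torch", "triton"))
```

So the Boltz error message asking for triton==3.3.0 conflicts with what torch 2.7.1 requires; pinning either one breaks the other.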
The same error.
Thanks.
Saverio
Very strange issue, I've tried my best! If you want to use Boltz 2, maybe you should try the online service at https://rowansci.com/blog/how-to-run-boltz-2, or follow that page using pixi instead of conda (maybe you should install pixi), which I'm using just fine.
Good luck! yb W
Hi, with pixi I get exactly the same error.
Saverio
Hi @wyb63136 @xavgit @coreyhowe999 @Shredderroy @xavierholt,
The solution here should be to add the flag --no_kernels to the prediction. The issue occurs on older NVIDIA devices that do not support the latest kernels.
Let me know if you still have issues.
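For deciding whether --no_kernels is needed, a rough, assumption-laden sketch that probes the local GPU before building the command line. The (8, 0) compute-capability threshold is a hypothetical placeholder, not a documented Boltz requirement, and the helper name `cuda_kernels_plausible` is mine:

```python
# Hedged sketch: probe the local GPU before deciding whether to pass
# --no_kernels to boltz predict. The (8, 0) threshold is a hypothetical
# placeholder, NOT a documented Boltz requirement -- adjust it to
# whatever the Triton kernels actually need on your setup.
import importlib.util


def cuda_kernels_plausible(min_capability=(8, 0)):
    """Return True only if torch sees a CUDA device at or above min_capability."""
    if importlib.util.find_spec("torch") is None:
        return False  # torch not installed, so the Triton kernels cannot run
    import torch
    if not torch.cuda.is_available():
        return False
    # get_device_capability returns a (major, minor) tuple, e.g. (8, 9)
    return torch.cuda.get_device_capability(0) >= min_capability


if __name__ == "__main__":
    flag = "" if cuda_kernels_plausible() else "--no_kernels"
    print(f"boltz predict examples/affinity.yaml --use_msa_server {flag}".strip())
```

This only automates the choice of flag; the fallback itself is the --no_kernels option suggested above.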
Hi gcorso, thanks for your suggestion. For which version of Boltz should it work? I've installed the latest version in a fresh Python env. I'm receiving a very long, repeating error when calling both

boltz predict /home/xxxx/sources/boltz_2.2.0/examples/affinity.yaml --use_msa_server --no_kernels

and

boltz predict /home/xxxx/sources/boltz_2.2.0/examples/affinity.yaml --use_msa_server
The GPU is:

(boltz2-env) xxxx@xxxx-Predator-PHN16-71:~/boltz_2_affinity_example$ lspci -v | grep VGA
0000:00:02.0 VGA compatible controller: Intel Corporation Raptor Lake-S UHD Graphics (rev 04) (prog-if 00 [VGA controller])
0000:01:00.0 VGA compatible controller: NVIDIA Corporation AD106M [GeForce RTX 4070 Max-Q / Mobile] (rev a1) (prog-if 00 [VGA controller])

Am I doing something wrong?
Thanks.
Saverio
Hi gcorso, in case it is useful: with boltz 2.1.1 there are no errors.
Thanks for the suggestion.
Saverio
Same problem.
Using 2.1.1 and adding the --no_kernels flag works for me:
pip install 'boltz[cuda]==2.1.1' -U
boltz predict examples/prot_custom_msa.yaml --no_kernels
This solution worked well for me https://github.com/jwohlwend/boltz/issues/355. I previously had the same issue, and now it seems to be running (and also faster, as expected).