Unable to write predictions to .pdb format
Hello all! Thank you very kindly for your latest Boltz-2 contribution to the structure prediction world. As usual, you've provided nothing short of gold.

Unfortunately, I think there's a small bug in the PDB writer code. I can write structure predictions to disk as mmCIF with `--output_format mmcif` (or by leaving the flag blank), but when I try to write a .pdb, it fails:
$ boltz predict 4_holdouts_no_affinity --output_format pdb --max_parallel_samples 1
Checking input data.
/shared/home/jderoo/miniconda3/envs/boltz2/lib/python3.10/site-packages/lightning_fabric/plugins/environments/slurm.py:204: The `srun` command is available on your system but is not used. HINT: If your intention is to run Lightning on SLURM, prepend your python command with `srun` like so: srun python3.10 /shared/home/jderoo/miniconda3/envs/boltz2/bin/b ...
Using bfloat16 Automatic Mixed Precision (AMP)
GPU available: True (cuda), used: True
TPU available: False, using: 0 TPU cores
HPU available: False, using: 0 HPUs
/shared/home/jderoo/miniconda3/envs/boltz2/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/logger_connector.py:76: Starting from v1.9.0, `tensorboardX` has been removed as a dependency of the `pytorch_lightning` package, due to potential conflicts with other packages in the ML ecosystem. For this reason, `logger=True` will use `CSVLogger` as the default logger, unless the `tensorboard` or `tensorboardX` packages are found. Please `pip install lightning[extra]` or one of them to enable TensorBoard support by default
Running structure prediction for 2 inputs.
/shared/home/jderoo/miniconda3/envs/boltz2/lib/python3.10/site-packages/pytorch_lightning/utilities/migration/utils.py:56: The loaded checkpoint was produced with Lightning v2.5.0.post0, which is newer than your current Lightning version: v2.5.0
You are using a CUDA device ('NVIDIA H100 NVL') that has Tensor Cores. To properly utilize them, you should set `torch.set_float32_matmul_precision('medium' | 'high')` which will trade-off precision for performance. For more details, read https://pytorch.org/docs/stable/generated/torch.set_float32_matmul_precision.html#torch.set_float32_matmul_precision
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0,1]
Predicting DataLoader 0: 0%| | 0/4 [00:00<?, ?it/s]Traceback (most recent call last):
File "/shared/home/jderoo/miniconda3/envs/boltz2/bin/boltz", line 8, in <module>
sys.exit(cli())
File "/shared/home/jderoo/miniconda3/envs/boltz2/lib/python3.10/site-packages/click/core.py", line 1157, in __call__
return self.main(*args, **kwargs)
File "/shared/home/jderoo/miniconda3/envs/boltz2/lib/python3.10/site-packages/click/core.py", line 1078, in main
rv = self.invoke(ctx)
File "/shared/home/jderoo/miniconda3/envs/boltz2/lib/python3.10/site-packages/click/core.py", line 1688, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/shared/home/jderoo/miniconda3/envs/boltz2/lib/python3.10/site-packages/click/core.py", line 1434, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/shared/home/jderoo/miniconda3/envs/boltz2/lib/python3.10/site-packages/click/core.py", line 783, in invoke
return __callback(*args, **kwargs)
File "/shared/home/jderoo/projects/test_boltz2/boltz/src/boltz/main.py", line 1186, in predict
trainer.predict(
File "/shared/home/jderoo/miniconda3/envs/boltz2/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 859, in predict
return call._call_and_handle_interrupt(
File "/shared/home/jderoo/miniconda3/envs/boltz2/lib/python3.10/site-packages/pytorch_lightning/trainer/call.py", line 47, in _call_and_handle_interrupt
return trainer_fn(*args, **kwargs)
File "/shared/home/jderoo/miniconda3/envs/boltz2/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 898, in _predict_impl
results = self._run(model, ckpt_path=ckpt_path)
File "/shared/home/jderoo/miniconda3/envs/boltz2/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 982, in _run
results = self._run_stage()
File "/shared/home/jderoo/miniconda3/envs/boltz2/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 1021, in _run_stage
return self.predict_loop.run()
File "/shared/home/jderoo/miniconda3/envs/boltz2/lib/python3.10/site-packages/pytorch_lightning/loops/utilities.py", line 179, in _decorator
return loop_run(self, *args, **kwargs)
File "/shared/home/jderoo/miniconda3/envs/boltz2/lib/python3.10/site-packages/pytorch_lightning/loops/prediction_loop.py", line 125, in run
self._predict_step(batch, batch_idx, dataloader_idx, dataloader_iter)
File "/shared/home/jderoo/miniconda3/envs/boltz2/lib/python3.10/site-packages/pytorch_lightning/loops/prediction_loop.py", line 268, in _predict_step
call._call_callback_hooks(trainer, "on_predict_batch_end", predictions, *hook_kwargs.values())
File "/shared/home/jderoo/miniconda3/envs/boltz2/lib/python3.10/site-packages/pytorch_lightning/trainer/call.py", line 222, in _call_callback_hooks
fn(trainer, trainer.lightning_module, *args, **kwargs)
File "/shared/home/jderoo/miniconda3/envs/boltz2/lib/python3.10/site-packages/pytorch_lightning/callbacks/prediction_writer.py", line 156, in on_predict_batch_end
self.write_on_batch_end(trainer, pl_module, outputs, batch_indices, batch, batch_idx, dataloader_idx)
File "/shared/home/jderoo/projects/test_boltz2/boltz/src/boltz/data/write/writer.py", line 164, in write_on_batch_end
to_pdb(new_structure, plddts=plddts, boltz2=self.boltz2)
File "/shared/home/jderoo/projects/test_boltz2/boltz/src/boltz/data/write/pdb.py", line 160, in to_pdb
atom1 = structure.atoms[bond["atom_1"]]
IndexError: invalid index to scalar variable.
Predicting DataLoader 0: 0%| | 0/4 [00:14<?, ?it/s]
Specifically, the failing line is:
File "/shared/home/jderoo/projects/test_boltz2/boltz/src/boltz/data/write/pdb.py", line 160, in to_pdb
atom1 = structure.atoms[bond["atom_1"]]
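For context, here is a minimal sketch of how NumPy produces this particular `IndexError` — this is just my guess at the mechanism, not a claim about Boltz's actual data layout: field access on a structured-array record returns a NumPy scalar, and indexing that scalar again (as `structure.atoms[...]` would if `bond["atom_1"]` ends up somewhere a scalar isn't expected, or if the array itself has collapsed to a scalar) raises exactly this message. The `bonds` dtype below is hypothetical.

```python
import numpy as np

# Hypothetical structured "bonds" table with the field the writer reads.
bonds = np.array([(0, 1)], dtype=[("atom_1", "i4"), ("atom_2", "i4")])

bond = bonds[0]                  # a record: field access works fine
idx = bond["atom_1"]             # a NumPy integer *scalar*, not an array

# Indexing the scalar reproduces the error from the traceback:
try:
    idx[0]
except IndexError as e:
    print(e)                     # invalid index to scalar variable.
```

So the crash suggests that somewhere along the PDB-writing path, something that is expected to be an indexable array is arriving as a 0-d/scalar value instead.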
My structure query did have both a protein and a ligand present. Any advice or a fix would be greatly appreciated!
Best, Jacob
Should be fixed in v2.0.3. Let me know if the problem persists!
Hi Jeremy,
That did it, thank you! Apologies for taking so long to follow up; unfortunately, life sometimes gets in the way.