
More components of the model should be TorchScript-compatible

Open gahdritz opened this issue 2 years ago • 1 comment

As it stands, only the attention primitives Attention and GlobalAttention are TorchScript-ed (or, for that matter, TorchScript-able) during inference. To improve runtime and memory allocation, more of the network's modules, especially in the Evoformer, should be made TorchScript-compatible. In my estimation, the biggest hurdle before this goal is the inference-time chunking functionality, which currently makes heavy use of function pointers that TorchScript does not support.
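To illustrate the obstacle, here is a minimal sketch (not OpenFold's actual code; `chunk_layer` and its signature are hypothetical stand-ins) of a chunking helper that takes an arbitrary Python callable. It runs fine eagerly, but attempting to script it typically fails, because TorchScript cannot type a free-form `Callable` argument:

```python
import torch
import torch.nn as nn
from typing import Callable

def chunk_layer(fn: Callable[[torch.Tensor], torch.Tensor],
                x: torch.Tensor,
                chunk_size: int) -> torch.Tensor:
    # Apply fn to slices of x along dim 0 and re-concatenate,
    # bounding peak activation memory during inference.
    outs = [fn(x[i:i + chunk_size]) for i in range(0, x.shape[0], chunk_size)]
    return torch.cat(outs, dim=0)

layer = nn.Linear(8, 8)

# Works eagerly: any callable can be passed in.
y = chunk_layer(layer, torch.randn(32, 8), chunk_size=4)

# Scripting typically fails at compile time, since a generic
# function-pointer argument is not a supported TorchScript type:
# torch.jit.script(chunk_layer)  # -> RuntimeError
```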

gahdritz avatar Nov 15 '21 22:11 gahdritz

Update: EvoformerBlocks are now fully TorchScript-able during training, where the un-scriptable "chunking" procedure is not necessary. However, scripting these components gives no appreciable runtime or memory allocation improvements (if anything, the runtime is slightly worse). Further scripting appears to be impeded by activation checkpointing, which is not supported by TorchScript, and the structure module's heavy use of global NumPy arrays and custom types, which apparently aren't supported either. Someone more knowledgeable about TorchScript might be able to proceed further or squeeze more juice out of the modules that are already scriptable.
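A minimal sketch of the current state, using a hypothetical stand-in block rather than OpenFold's real EvoformerBlock: scripting the block itself works on the training path, but wrapping it in activation checkpointing blocks further scripting, since `torch.utils.checkpoint` is Python-only:

```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

class Block(nn.Module):
    # Hypothetical stand-in for an Evoformer-style block.
    def __init__(self, dim: int = 64):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.ff = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.ff(self.norm(x))

# Scripting the block alone works (training path, no chunking)...
scripted = torch.jit.script(Block())
out = scripted(torch.randn(2, 16, 64))

# ...but a stack that checkpoints its blocks cannot be scripted:
class Stack(nn.Module):
    def __init__(self):
        super().__init__()
        self.block = Block()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return checkpoint(self.block, x)

# torch.jit.script(Stack())  # -> fails: checkpoint() is not scriptable
```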

gahdritz avatar Nov 19 '21 20:11 gahdritz