Gustaf Ahdritz

Results: 8 issues from Gustaf Ahdritz

.pdb files in the `uniclust30/` section of the RODA database have erroneous suffixes in their filenames. Instead of ending in `.pdb`, which is what OpenFold expects, they end in `_model_5_unrelaxed.pdb`. In order...
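A minimal rename sketch for a local mirror of the affected files. The root path and the exact offending suffix below are illustrative assumptions, not part of the issue text:

```python
import os

# Hypothetical local mirror of the RODA uniclust30/ data; adjust to your layout.
ROOT = "roda/uniclust30"
BAD_SUFFIX = "_model_5_unrelaxed.pdb"

for dirpath, _, filenames in os.walk(ROOT):
    for name in filenames:
        if name.endswith(BAD_SUFFIX):
            # e.g. "xxxx_model_5_unrelaxed.pdb" -> "xxxx.pdb"
            fixed = name[: -len(BAD_SUFFIX)] + ".pdb"
            os.rename(os.path.join(dirpath, name), os.path.join(dirpath, fixed))
```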

PyTorch Lightning and DeepSpeed LR schedulers don't interact correctly at the moment. Follow [the PL issue](https://github.com/PyTorchLightning/pytorch-lightning/issues/11694) for updates. In the meantime, use `configure_optimizers` in `train_openfold` to add LR scheduling logic.
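A minimal sketch of what such a workaround could look like in a `LightningModule`. The wrapper class name, optimizer settings, and warmup schedule below are placeholders, not the actual `train_openfold` code:

```python
import pytorch_lightning as pl
import torch


class OpenFoldWrapper(pl.LightningModule):  # illustrative wrapper, not OpenFold's actual class
    def configure_optimizers(self):
        optimizer = torch.optim.Adam(self.parameters(), lr=1e-3, eps=1e-5)
        # Attach a plain PyTorch scheduler here instead of relying on the
        # scheduler defined in the DeepSpeed config, which PL does not hook up correctly.
        scheduler = torch.optim.lr_scheduler.LambdaLR(
            optimizer,
            lr_lambda=lambda step: min(1.0, (step + 1) / 1000),  # simple linear warmup
        )
        return {
            "optimizer": optimizer,
            "lr_scheduler": {"scheduler": scheduler, "interval": "step"},
        }
```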

I've implemented low-memory attention (9670958) using an algorithm from a recent preprint (https://arxiv.org/pdf/2112.05682.pdf), extended slightly to support multiple biases and extra batch dimensions (a simplified sketch of the core idea follows the labels below). Lacking the JAX...

enhancement
good first issue
help wanted
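For reference, a stripped-down, single-bias sketch of the numerically stable chunked-softmax idea from the preprint. It is not the OpenFold implementation (which handles multiple biases and batch dimensions); it only illustrates how the full attention matrix is avoided:

```python
import torch


def chunked_attention(q, k, v, bias=None, kv_chunk_size=1024):
    """Memory-efficient attention over the key/value dimension.

    q: [..., Q, C], k: [..., K, C], v: [..., K, C], bias: [..., Q, K] or None.
    K is processed in chunks while a running max / numerator / denominator are
    maintained, so the full [Q, K] attention matrix is never materialized.
    """
    scale = q.shape[-1] ** -0.5
    out_num = torch.zeros_like(q)                            # running numerator
    out_den = q.new_zeros(q.shape[:-1] + (1,))               # running denominator
    running_max = q.new_full(q.shape[:-1] + (1,), float("-inf"))

    for start in range(0, k.shape[-2], kv_chunk_size):
        k_c = k[..., start : start + kv_chunk_size, :]
        v_c = v[..., start : start + kv_chunk_size, :]
        logits = torch.einsum("...qc,...kc->...qk", q * scale, k_c)
        if bias is not None:
            logits = logits + bias[..., start : start + kv_chunk_size]
        chunk_max = logits.max(dim=-1, keepdim=True).values
        new_max = torch.maximum(running_max, chunk_max)
        correction = torch.exp(running_max - new_max)        # rescale old partial sums
        weights = torch.exp(logits - new_max)
        out_num = out_num * correction + torch.einsum("...qk,...kc->...qc", weights, v_c)
        out_den = out_den * correction + weights.sum(dim=-1, keepdim=True)
        running_max = new_max

    return out_num / out_den
```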

The `chunk_layer` function in `openfold/utils/tensor_utils.py`, which implements the "chunking" procedure described in subsection 1.11.8 of the AlphaFold 2 supplement, relies on a memory-expensive expand/reshape operation at the top to standardize... (A stripped-down sketch of the chunking idea follows the labels below.)

enhancement
good first issue
help wanted
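A toy illustration of the chunking idea only. The real `chunk_layer` standardizes batch-like dimensions and supports nested inputs; this sketch assumes a flat dict of tensors that all share the chunked dimension and is not a drop-in replacement:

```python
import torch


def chunk_layer_simple(layer, inputs: dict, chunk_size: int, dim: int = -2) -> torch.Tensor:
    """Run `layer` on slices of `dim` and re-concatenate the outputs, so only
    one chunk's intermediate activations are alive at any time."""
    total = next(iter(inputs.values())).shape[dim]
    out_chunks = []
    for start in range(0, total, chunk_size):
        length = min(chunk_size, total - start)
        chunk_in = {k: t.narrow(dim, start, length) for k, t in inputs.items()}
        out_chunks.append(layer(**chunk_in))
    return torch.cat(out_chunks, dim=dim)
```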

As it stands, only the attention primitives `Attention` and `GlobalAttention` are TorchScript-ed (or, for that matter, TorchScript-able) during inference. For better runtimes and memory allocation, more of the network's modules should be scriptable, especially... (A minimal `torch.jit.script` example follows the labels below.)

enhancement
help wanted
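A minimal example of scripting a module with `torch.jit.script`. The module here is a stand-in, not one of OpenFold's blocks; the point is only that a module must be scriptable (static types, supported ops) before it can be compiled this way:

```python
import torch
import torch.nn as nn


class TinyAttention(nn.Module):
    """Stand-in module; the real targets would be OpenFold's own blocks."""

    def __init__(self, c: int):
        super().__init__()
        self.to_qkv = nn.Linear(c, 3 * c)
        self.out = nn.Linear(c, c)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        q, k, v = self.to_qkv(x).chunk(3, dim=-1)
        attn = torch.softmax(q @ k.transpose(-1, -2) / q.shape[-1] ** 0.5, dim=-1)
        return self.out(attn @ v)


module = TinyAttention(64).eval()
scripted = torch.jit.script(module)            # compile to TorchScript
with torch.no_grad():
    y = scripted(torch.randn(2, 16, 64))       # runs under the TorchScript interpreter
```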

A sample "LLaMA-like" config is provided, but it doesn't use the `OlmoLlamaBlock` `BlockType` defined in `model/olmo.py`. Why is that? https://github.com/allenai/OLMo/blob/97296e610c24dd1bb098ec64660dfcafcba62d24/configs/llama7.yaml#L21

### 🐛 Describe the bug

I'm trying to finetune OLMo 7B on a single 4xA100 40GB node. I'm using the official config with no repo changes except that I've 1)...

type/bug

`hf_olmo/convert_olmo_to_hf.py` currently crashes if the YAML file in the input checkpoint refers to a local tokenizer (it tries to resolve the local path on the HF Hub). I added a check to...
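A rough sketch of the kind of check described, not the actual patch. It assumes the tokenizer identifier read from the checkpoint's YAML is either a local `tokenizer.json` path or a Hub model ID; the `load_tokenizer` name and structure are illustrative:

```python
import os

from transformers import AutoTokenizer, PreTrainedTokenizerFast


def load_tokenizer(tokenizer_id: str):
    if os.path.isfile(tokenizer_id):
        # Local tokenizer.json referenced by the checkpoint's YAML: wrap it directly
        # instead of treating the path as a Hub repo name (which is what currently fails).
        return PreTrainedTokenizerFast(tokenizer_file=tokenizer_id)
    # Otherwise assume it is a model identifier on the Hugging Face Hub.
    return AutoTokenizer.from_pretrained(tokenizer_id)
```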