--trace_model performance
Hi, I have tested the --trace_model mode on a small batch of sequences of the same length; I get about 80s of tracing time followed by roughly 20s of inference per sequence. If I fold them without --trace_model, inference takes 18-19s per sequence. Am I doing something wrong? There doesn't seem to be much documentation about this feature.
This feature is under active development and will improve soon, but you should still see a speedup with the current version. Which version of torch are you using? To get the most out of tracing, it's important that you use torch >= 1.12.0.
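For context, here is a minimal, OpenFold-agnostic sketch of why tracing front-loads cost: torch.jit.trace pays a one-time compilation price, after which repeated forward passes on same-shaped inputs are typically faster. The toy model and shapes below are purely illustrative.

```python
import time
import torch

print(torch.__version__)  # tracing benefits most from torch >= 1.12.0

# Illustrative toy model, not an OpenFold module
model = torch.nn.Sequential(
    torch.nn.Linear(256, 1024),
    torch.nn.ReLU(),
    torch.nn.Linear(1024, 256),
).eval()

example = torch.randn(8, 256)

start = time.time()
traced = torch.jit.trace(model, example)   # one-time tracing cost
print(f"trace time: {time.time() - start:.3f}s")

with torch.no_grad():
    for _ in range(100):                   # cost amortized over repeated runs
        traced(example)
```

The same trade-off applies here: tracing only pays off if the per-sequence savings over the batch outweigh the upfront tracing time.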
Thanks, I'm using 1.12.0+cu116.
What kind of GPU?
I'm using an RTX 3060. I just placed the individual FASTA files in the input directory and ran run_pretrained_openfold.py on that directory with --trace_model enabled. Is there anything else I need to do? I also had --use_precomputed_alignments enabled, btw. Thanks for the help.
No, that sounds right. Sit tight until I upload the new version of tracing (should be fairly soon). In the meantime, you can enable use_flash during inference for a speedup.
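In case it helps, here is a minimal sketch of enabling FlashAttention before inference, assuming the config exposes the toggle under `globals.use_flash` (the exact attribute location and the preset name "model_1" are assumptions and may differ between OpenFold versions):

```python
from openfold.config import model_config

# "model_1" is an illustrative preset name
config = model_config("model_1")

# Assumption: the FlashAttention switch lives under config.globals
config.globals.use_flash = True

# ... then build the model from `config` and run inference as usual
```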
Sure thing, thanks!
Hi @gahdritz, just curious: any updates on this? Is --trace_model usable now?
Whoops, forgot to close this. It should work fine now, especially for shorter proteins (< 1000 residues).