
--trace_model performance

Open mrhoag5 opened this issue 3 years ago • 6 comments

Hi, I have tested the --trace_model mode on a small batch of sequences of the same length; I get an 80s tracing time followed by 20s of inference per sequence. If I fold them without --trace_model, inference takes 18-19s per sequence. Am I doing something wrong? There doesn't seem to be much documentation about this feature.
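The trade-off being described can be illustrated with a minimal sketch (this is not OpenFold's actual code, just a toy model): torch.jit.trace pays a one-time cost up front, and the traced module is then reused for every subsequent input of the same shape.

```python
import torch

# Toy stand-in for a model; tracing is paid once per input shape,
# which is why OpenFold traces per sequence length.
model = torch.nn.Sequential(
    torch.nn.Linear(8, 8),
    torch.nn.ReLU(),
    torch.nn.Linear(8, 1),
).eval()

example = torch.randn(4, 8)
with torch.no_grad():
    # One-time tracing cost (the "80s" step in the report above).
    traced = torch.jit.trace(model, example)

    # Subsequent calls reuse the traced graph; this is where the
    # per-sequence speedup is supposed to come from.
    out_traced = traced(example)
    out_eager = model(example)

# The traced module should reproduce the eager outputs.
print(torch.allclose(out_traced, out_eager, atol=1e-6))
```

The tracing cost only pays off if it is amortized over enough same-length sequences, which is why a small batch may not show a net win.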

mrhoag5 avatar Aug 02 '22 16:08 mrhoag5

This feature is under active development and will be improving soon, but you should still see a speedup with the current version. Which version of torch are you using? To get the most out of tracing, it's important that you use torch >= 1.12.0.
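A quick way to check the requirement is to compare the reported torch version against the minimum; a minimal sketch (the parsing helper below is illustrative, not part of OpenFold) that strips local build tags like "+cu116" before comparing:

```python
def meets_minimum(version: str, minimum: tuple = (1, 12, 0)) -> bool:
    """Return True if a torch version string satisfies the minimum.

    Strips local build suffixes, e.g. "1.12.0+cu116" -> "1.12.0".
    """
    core = version.split("+")[0]
    parts = tuple(int(p) for p in core.split("."))
    return parts >= minimum

# The version reported later in this thread:
print(meets_minimum("1.12.0+cu116"))
```

In practice, `torch.__version__` supplies the string to check; production code would typically use a proper version parser rather than manual tuple comparison.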

gahdritz avatar Aug 02 '22 17:08 gahdritz

Thanks, I'm using 1.12.0+cu116.

mrhoag5 avatar Aug 02 '22 17:08 mrhoag5

What kind of GPU?

gahdritz avatar Aug 02 '22 18:08 gahdritz

I'm using an RTX 3060. I just placed the separate fasta files in the input directory and ran run_pretrained_openfold.py on that directory with --trace_model enabled. Is there anything else I need to do? I also had --use_precomputed_alignments enabled, btw. Thanks for the help.

mrhoag5 avatar Aug 02 '22 19:08 mrhoag5

No, that sounds right. Sit tight until I upload the new version of tracing (should be fairly soon). In the meantime, you can enable use_flash during inference for a speedup.

gahdritz avatar Aug 02 '22 20:08 gahdritz

Sure thing, thanks!

mrhoag5 avatar Aug 02 '22 20:08 mrhoag5

Hi @gahdritz just curious, any updates on this? Is the trace_model usable now?

kvnsng avatar Aug 30 '22 21:08 kvnsng

Whoops, forgot to close this. It should work fine now, especially for shorter proteins (< 1000 residues).

gahdritz avatar Aug 30 '22 22:08 gahdritz