davidyanglee
I think you may be seeing this? I noticed that the bottleneck seems to be relaxation and not the unrelaxed prediction: when two GPUs are used, both identical model /...
Hi @epenning, the bottleneck problem is very interesting. Setting TF_FORCE_UNIFIED_MEMORY=0, or even using ENABLE_GPU_RELAX=False, still gives the same bottleneck, so it is not necessarily GPU or GPU...
@epenning, thanks for verifying. On my machine, I also get the same problem whether I set TF_FORCE_UNIFIED_MEMORY=0 or 1, create a separate venv, or use 2 different...
The `--use_precomputed_msas=True \` argument works for me, but I had to make sure (1) the output directory is exact: the same directory name as the fasta file, as would have been...
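For anyone else hitting this: a sketch of the directory layout that `--use_precomputed_msas=True` expects, assuming the default AlphaFold output convention where results land under a subdirectory named after the fasta file (the exact alignment filenames depend on which databases were used, so treat these as illustrative):

```
output_dir/
└── my_target/              # must match the fasta name, e.g. my_target.fasta
    └── msas/
        ├── uniref90_hits.sto
        ├── mgnify_hits.sto
        └── ...             # remaining alignment files from the first run
```

If the subdirectory name does not match the fasta basename exactly, AlphaFold will not find the precomputed MSAs and will recompute them.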
@atillack Is there a chart of the TARGETS= values to set? I got the same error message as above and am trying to figure out what TARGETS=# should be for an Nvidia...
Actually, I randomly tested TARGETS="80" for an Nvidia 3090 and it worked.
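For reference, TARGETS appears to correspond to the CUDA compute capability digits (e.g. sm_80). A minimal sketch of the mapping for a few common cards, assuming the build accepts the capability without the dot (the helper name `targets_for` is hypothetical, not part of any build script):

```python
# Hypothetical lookup: GPU model -> CUDA compute capability digits.
# Values are Nvidia's published compute capabilities. Code built for
# "80" (Ampere) also runs on an RTX 3090 (native 8.6), which is likely
# why TARGETS="80" worked above.
COMPUTE_CAPABILITY = {
    "V100": "70",      # Volta
    "RTX 2080": "75",  # Turing
    "A100": "80",      # Ampere (data center)
    "RTX 3090": "86",  # Ampere (consumer)
    "RTX 4090": "89",  # Ada Lovelace
}

def targets_for(gpu_name: str) -> str:
    """Return the TARGETS digits for a known GPU model."""
    return COMPUTE_CAPABILITY[gpu_name]

print(targets_for("RTX 3090"))  # -> 86 (native capability of the 3090)
```

You can also query the capability directly with `nvidia-smi --query-gpu=compute_cap --format=csv` on reasonably recent drivers.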
Hi Ragunyrasta, I have a similar problem. Have you found your answer? I am curious to know. Thanks! David.
Thank you, this is very helpful! I have 128 GB and, based on what you said, I assume a faster transfer via M.2 over PCIe or SATA should help too....
Thank you for the hint! I will play with the number and watch the Conky bars for CPU. Best, David On Wed, Feb 2, 2022 at 12:30 PM ragunyrasta ***@***.***> wrote: >...
I am actually running AlphaFold and not Rosetta. 100 aa takes me about 15 min (features, CPU) + 4 min (GPU) x 5 models = 35 min or so. The "reduced size...
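The runtime arithmetic above can be written out as a quick sanity check (the times are the rough figures quoted in the comment, not measurements):

```python
# Rough per-protein runtime for a 100 aa sequence, using the figures above.
feature_min = 15    # MSA/feature generation on CPU, run once
per_model_min = 4   # GPU inference time per model
n_models = 5        # default number of AlphaFold models

total_min = feature_min + per_model_min * n_models
print(total_min)  # -> 35
```

This also shows why the CPU feature step dominates for short sequences: it is a fixed cost paid once, while the GPU cost scales with the number of models.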